Living Autonomously

22 Jan 2026 12:15h - 13:00h

Session at a glance

Summary

This panel discussion at a Business Insider event focused on the current state and future of robotics, featuring three experts: Jake Loosararian from Gecko Robotics, Daniela Rus from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Shao Tianlan from MechMind Robotics. The conversation centered on recent advances in “physical AI” – robots that can understand and interact with the physical world through improved sensors, data collection, and artificial intelligence.


The panelists highlighted significant progress over the past year, with Tianlan noting that his company delivered over 10,000 intelligent robots in 12 months, more than their previous eight years combined. Rus emphasized breakthroughs in co-designing robot bodies and brains together, allowing custom robots to be designed within hours and manufactured within days rather than months. Loosararian focused on how AI has driven demand for better data collection from physical environments, leading to robots that can diagnose infrastructure health and predict failures.


A key theme was the distinction between current capabilities and public expectations. The experts noted that while robots excel at structured tasks like warehouse operations and container movement, manipulation in unstructured environments remains challenging due to limitations in tactile sensing – robots lack the equivalent of human skin and fingertips. They emphasized that many impressive robot demonstrations still rely on human teleoperation rather than full autonomy.


The discussion addressed safety concerns and deployment strategies, with panelists advocating for gradual implementation in controlled environments like factories and logistics centers before attempting more complex human-interactive applications. They agreed that while significant technical challenges remain, particularly in manipulation and sensing, the foundational infrastructure for widespread robot deployment is rapidly advancing, making them optimistic about progress in the coming years.


Keypoints

Major Discussion Points:

Recent Breakthroughs in Physical AI and Robotics: The panelists discussed exciting developments in the past 12 months, including hyper-focus on ROI-driven AI applications, co-designing robot bodies and brains using AI, and the rapid acceleration of intelligent robot deployment (with one company delivering 10,000+ units in one year versus 8 years for the first 10,000).


Technical Challenges and Current Limitations: Key obstacles include the need for better manipulation capabilities, lack of skin-like sensors for dexterous tasks, the “long tail” of unstructured environments, and the reality that many impressive robot demonstrations still rely heavily on teleoperation rather than true autonomy.


Safety and Deployment Strategies: Discussion of how to safely deploy robots by establishing clear boundaries and rules, focusing initially on controlled environments like factories and logistics centers rather than direct human interaction scenarios, and implementing graduated autonomy rather than waiting for perfect humanoid capabilities.


The Role of Data and Learning: Emphasis on robots gathering previously unavailable data sets to drive business ROI, the importance of learning from demonstrations (how humans naturally teach), and developing “physical AI” that understands physics and can continue learning after deployment rather than being frozen after training.


Future Timeline and Expectations: Debate over realistic timelines for robot capabilities, with some optimism about near-term applications in structured environments while acknowledging that general-purpose home robots remain “a long way away,” and discussion of whether current progress exceeds or falls short of expectations from 10 years ago.


Overall Purpose:

The discussion aimed to explore the current state and future prospects of robotics and AI, focusing on practical applications, technical challenges, and the realistic timeline for robots becoming integrated into various aspects of work and daily life.


Overall Tone:

The tone was optimistic yet realistic throughout. The panelists demonstrated enthusiasm about recent breakthroughs and near-term possibilities while maintaining scientific honesty about current limitations and challenges. There was a consistent effort to balance excitement about the technology’s potential with practical considerations about safety, deployment challenges, and realistic timelines. The conversation remained collaborative and educational, with panelists building on each other’s points rather than disagreeing significantly.


Speakers

Jamie Heller – Editor-in-Chief of Business Insider, panel moderator


Jake Loosararian – Co-founder and Chief Executive Officer of Gecko Robotics, specializes in robotics for infrastructure inspection and data collection


Daniela Rus – Director of Computer Science and Artificial Intelligence Laboratory at MIT, researcher in robotics and AI, associated with companies Liquid AI and Venti Technologies


Shao Tianlan – Founder and Chief Executive Officer of MechMind Robotics, focuses on intelligent robots and 3D vision technology


Audience – Various audience members including:


– Vanessa Mendez from Houston, Texas (startup focusing on drone automation and visual AI for solar industry)


– Miguel from Spain (physics PhD student)


– Giulia from Italy (researcher in human-robot interaction, global shaper)


Additional speakers:


None identified beyond those listed above.


Full session report

Comprehensive Summary: The Future of Robotics and Physical AI

Introduction and Context

This panel discussion at a Business Insider event brought together three leading experts in robotics to examine the current state and future prospects of the field. Moderated by Jamie Heller, Editor-in-Chief of Business Insider, the conversation featured Jake Loosararian, Co-founder and Chief Executive Officer of Gecko Robotics; Daniela Rus, Director of Computer Science and Artificial Intelligence Laboratory at MIT; and Shao Tianlan, Founder and Chief Executive Officer of MechMind Robotics. The discussion centered on recent advances in “physical AI” – robots that can understand and interact with the physical world through improved sensors, data collection, and artificial intelligence.


Recent Breakthroughs and Industry Transformation

The panelists highlighted remarkable progress in robotics over the past year, with all three speakers demonstrating strong consensus on the transformative nature of recent developments. Tianlan provided perhaps the most striking evidence of this acceleration, noting that his company delivered their second batch of 10,000 intelligent robots in just one year, compared to eight years for their first 10,000 units. This dramatic scaling represents a fundamental shift in the industry’s capacity for deployment and adoption.


Rus emphasized breakthroughs in co-designing robot bodies and brains together using AI, allowing custom robots to be designed within hours and manufactured at industrial grade within days rather than months. This represents a paradigm shift from traditional robotics development, where hardware and software were developed separately. The integration of AI into the design process itself has enabled unprecedented customization and rapid prototyping. Rus also leads two companies applying these advances: Liquid AI, which brings physical AI models to business applications, and Venti Technologies, which automates port operations.


Loosararian focused on how AI has driven demand for better data collection from physical environments, leading to what he termed “physical AI” – specifically robots that can diagnose the health of the built world, including bridges, power plants, ships, and other critical infrastructure. His approach emphasizes return on investment through data-driven decision-making capabilities, marking a shift from robots as mere mechanical tools to intelligent data collection and analysis systems.


All three experts agreed that the robotics industry has undergone a complete transformation from what Loosararian described as historically bleak prospects. He noted that at Y Combinator, “if you were doing a robotics company, it was basically the same as choosing to die a slow death or a fast death.” This transformation has been enabled by advances across multiple domains: materials science, hardware miniaturization, computational power, data availability, and artificial intelligence algorithms.


Technical Framework and Current Capabilities

Tianlan provided a useful framework for understanding robot capabilities, identifying three major directions in robot abilities: navigation, locomotion, and manipulation. This organizational structure helps clarify where progress has been made and where challenges remain.


Navigation capabilities have seen significant advancement, with robots now able to move effectively through structured and semi-structured environments. Locomotion has similarly improved, with robots demonstrating increased mobility across various terrains and conditions.


However, manipulation remains the most significant challenge. Rus provided a particularly vivid explanation of this problem, comparing robot materials to human anatomy: “So robots are made primarily from hard plastics and metal. And those materials are more like fingernails than the skin and flesh we have on our fingertips… just try manipulating your phone with your fingernails, not with your skin.” This analogy effectively illustrated why robots struggle with tasks that humans find trivial.


Current Limitations and Honest Assessments

Despite the optimism about recent progress, the panelists demonstrated remarkable honesty about current technical limitations. The most significant area of agreement concerned manipulation capabilities, with both Rus and Tianlan acknowledging this as the primary technical challenge facing the field.


However, the panelists disagreed on the implications of these limitations. While Rus emphasized that manipulation remains the biggest challenge due to lack of skin-like sensors for dexterous tasks, Tianlan argued that robots don’t need human-level dexterity to perform many useful tasks effectively. To support his position, Tianlan described a personal experiment: “I personally have tried to live with only three fingers and only do this action. I personally have tried that for several hours. Life is just fine… so this, I think, justifies my opinion that we don’t need, say, human level dexterity to enable robot to do many useful things.”


Tianlan also provided an interesting biological perspective: “We don’t need Einstein level of intelligence. Because I really like visiting zoos… we can find animals that don’t know any idea about language doing perfect manipulation. Like a squirrel monkey, this big, probably with a 30 gram brain can do perfect manipulation.”


The panelists also addressed the gap between public perception and reality regarding robot autonomy. Loosararian revealed an uncomfortable truth about current home robotics expectations: “Sure, if you want to pay $40,000 for that and you want someone to be able to see what’s going on in your house. Maybe we’re getting out of the shower and you know, there’s someone teleoperating your robot.” This comment highlighted the reality that many impressive robot demonstrations still rely heavily on human teleoperation rather than true autonomy, and raised important privacy concerns about home robot deployment.


Rus identified additional challenges including the “long tail” of unstructured environment situations that robots haven’t been tuned for, and the significant gap between research lab demonstrations and truly scaled deployment solutions. She noted that a research lab robot for folding laundry “might cost you half a million dollars,” illustrating the cost barriers that remain for many applications.


Deployment Strategies and Safety Considerations

The panelists showed strong agreement on the need for gradual, controlled deployment strategies rather than attempting revolutionary breakthroughs in general-purpose robotics. All three advocated for focusing on specific applications in controlled environments before attempting more complex human-interactive scenarios.


Loosararian introduced the concept of “droid over humanoid” applications, arguing that specialized robots designed for specific tasks offer more sustainable competitive advantages than attempting to build general-purpose humanoid robots. This approach aligns with current successful deployments in industrial settings.


Rus highlighted existing successful applications: “There are so many extraordinary applications and deployments that we already have in the world today… our company Venti Technologies is automating port operations… Symbotic automates storage systems, and they move millions of boxes every day.” These examples demonstrated that robotics is already delivering significant economic value in controlled industrial environments.


Tianlan emphasized the importance of establishing clear safety boundaries and rules, comparing robots to other powerful tools such as cars or chainsaws, which are deployed safely precisely because such boundaries exist. This perspective treats robots as sophisticated tools requiring appropriate safety protocols rather than fundamentally different entities requiring entirely new regulatory frameworks.


Business Models and Economic Considerations

The discussion revealed interesting differences in business model approaches. Loosararian advocated for a service-based approach, shifting from selling robots to deploying and operating them on behalf of customers. This model allows companies to maintain control over their robots, continue learning from deployment environments, and provide ongoing value through data collection and analysis.


In contrast, Tianlan’s approach involves direct sales and delivery of robots to customers, as evidenced by his company’s delivery model. This difference in business philosophy reflects broader strategic questions about how robotics companies can maintain competitive advantages and continue improving their systems after deployment.


The panelists acknowledged that cost remains a significant barrier for many applications, constraining current deployments to high-value industrial uses where the return on investment justifies the expense.


Human-Robot Interaction and Learning

The panelists demonstrated strong agreement on the importance of natural human-robot interaction, though they approached this challenge from different angles. Both Rus and Tianlan agreed that demonstration-based learning represents the most intuitive way for humans to teach robots new tasks.


Rus emphasized that “machines should adapt to humans rather than humans adapting to machines,” requiring advances in perception, situation awareness, and activity recognition. This philosophy places the burden of adaptation on the technology rather than expecting users to learn complex interfaces.


Tianlan focused on demonstration-based learning as the primary method for teaching robots, arguing that this approach leverages humans’ natural teaching instincts.


Both speakers agreed on the importance of robots being able to detect abnormal situations and call for human intervention, a capability Tianlan described as a form of consciousness. Rus defined consciousness in practical terms as environmental awareness combined with the ability to respond appropriately.


Future Timeline and Expectations

The panelists showed both agreement and disagreement regarding future timelines for robot capabilities. All expressed optimism about near-term applications in structured environments, but they differed on more ambitious goals.


Rus maintained a cautious view about humanoid robots, holding that humanoid home assistants remain far away despite advances in other areas. When the moderator posed a hypothetical question about what they would ask Elon Musk regarding his ambitious robotics timelines, Rus expressed skepticism about overly optimistic projections.


Tianlan expressed more aggressive optimism, stating that “the hardest thing are behind us” and envisioning graduated autonomy with various robot forms performing specific tasks reliably in the near future.


Loosararian focused on the transformation of industry prospects, noting the dramatic shift from pessimism to excitement over the past decade. However, he emphasized the need for realistic expectations about current capabilities and the importance of teleoperation as a stepping stone to full autonomy.


Technical Solutions for Remote Operations

When addressing audience questions about robotics applications in remote areas, Rus provided specific technical recommendations. She suggested that “state-space models” and “Liquid AI open-source models” could be particularly valuable for remote infrastructure management applications, offering concrete technical pathways for addressing challenging deployment scenarios.


Audience Engagement and Broader Applications

The discussion included several audience questions that highlighted the broad interest in robotics applications. Questions came from professionals working in drone automation for solar industry infrastructure, physics researchers interested in robot consciousness, and human-robot interaction specialists. These inquiries demonstrated the wide range of applications and concerns surrounding robotics development, from practical industrial applications to fundamental questions about artificial consciousness and social integration.


Key Areas of Consensus and Disagreement

The panelists achieved remarkable consensus on several fundamental points. All agreed that physical AI represents a transformative breakthrough, that manipulation remains the primary technical challenge, that industrial applications in controlled environments offer the most viable near-term opportunities, and that demonstration-based learning provides the most natural approach to human-robot interaction.


However, significant disagreements emerged around deployment timelines, with Rus maintaining caution about humanoid robot timelines while Tianlan expressed strong optimism about rapid progress. The panelists also differed in their business model approaches, with Loosararian favoring service-based deployment and Tianlan focusing on direct sales and delivery.


Perhaps most significantly, they disagreed about the requirements for robot dexterity, with Rus emphasizing the need for advanced manipulation capabilities while Tianlan argued that limited dexterity could still enable many useful applications.


Implications and Future Directions

The discussion revealed a field in transition, moving from experimental research to practical deployment while grappling with significant technical and social challenges. The panelists’ honest assessment of current limitations, combined with their optimism about future progress, suggests a maturing industry with realistic expectations about both opportunities and constraints.


The emphasis on gradual deployment in controlled environments, combined with the focus on specific applications rather than general-purpose robots, indicates a pragmatic approach to bringing robotics technology to market. This strategy may prove more sustainable than attempting revolutionary breakthroughs in general-purpose humanoid robots.


The discussion also highlighted the importance of addressing privacy, safety, and social concerns as robots become more prevalent. The revelation about teleoperation requirements and associated privacy implications raises important questions about the true nature of robot autonomy that will need to be addressed as the technology advances.


Conclusion

This panel discussion provided a nuanced view of robotics development, balancing genuine excitement about recent progress with honest acknowledgment of current limitations. The panelists’ agreement on fundamental technical challenges and deployment strategies, combined with their different perspectives on timelines and business models, reflects a field that has achieved clarity about core problems while maintaining healthy debate about solutions and implementation approaches.


The conversation successfully moved beyond futuristic speculation to focus on practical realities, current applications, and realistic pathways for continued development. This grounded approach, combined with the panelists’ demonstrated expertise and candor about challenges, provides valuable insights into the actual state of robotics technology and its likely evolution in the coming years.


The discussion ultimately suggests that while the robotics revolution may not unfold exactly as popular media portrays it, the fundamental infrastructure for widespread robot deployment is rapidly advancing, making the panelists’ cautious optimism about the field’s future prospects well-founded.


Session transcript

Jamie Heller

Hello. I’m Jamie Heller. I’m the editor-in-chief of Business Insider.

And we’re here today for a panel called Living Autonomously. It’s about robotics, robots, and living with them. And we have three terrific panelists who are in the thick of it.

Here is Jake Loosararian, co-founder and chief executive officer of Gecko Robotics. We have Daniela Rus, who’s the director of Computer Science and Artificial Intelligence Laboratory at MIT. And we have Shao Tianlan, who’s founder and chief executive officer of MechMind Robotics.

So thank you all for being here. Thank you. And let’s just get started.

So many advancements on robotics in the last many years, in the last few years. I’d like to each of you please tell me what you think is just the most exciting thing in the very recent last 12 months. Jake, why don’t we start with you?

Jake Loosararian

Yeah, the most exciting thing for us has been a hyper-focus from the actual users of these robots and the AI models that help to power the robots’ effectiveness and what to do with the information and data.

So it’s been a hyper-focus on, OK, AI, AI, AI, what’s the ROI of AI? And that actually leads you down a path of interrogating the information and data sets that feed the model, which then allows for you to interrogate, do we actually have the information and data sets that we need to drive the ROI for all the investments that are being made right now into artificial intelligence?

And so that actually leads you down a path to more and more of a physical realm. And so this is why physical AI has been a topic that you’ve just heard so much about. And so for us at Gecko, what we focus on is gathering information and data sets from a bunch of different robots that we make, and then also that we integrate with.

And pulling all that data into one source of truth called Cantilever, which helps make different kinds of decisions using information and data sets that never existed before. But this is a trend that’s occurred because AI has honed this question of what sorts of decisions can I make or can I not make, and what’s missing for me to be able to take advantage and have lots of ROI from the promise that we’ve all been hearing about with artificial intelligence.

Jamie Heller

And is it the data that helps robots think and behave, or the data that robots can gather to help companies?

Jake Loosararian

It’s been both, but it’s actually been the data sets that the customers never had before. So in our case, it’s diagnosing the health of the built world is what we call it. So it’s how healthy is your bridge or your power plants or your ship or whatever the physical structure is you’re building.

And then using that information data to help to drive optimizations, longevity of the asset, predicting failures, those sorts of data sets. And then we’ll use the information data sets that our robots are collecting while they’re gathering that data that’s driving billions of dollars of returns for customers to actually make the robots smarter, make the robots be able to live more autonomously.

And that creates a very interesting possibility for foundation models in these environments that you typically just don’t get information from. So we start with the data that can drive ROI and then how that can make our robotic systems more powerful than anyone else’s.

Jamie Heller

Got it. Daniela, what’s been most exciting for you in your lab?

Daniela Rus

Very hard to choose one thing, but I will say we are focused on expanding robot capabilities. And that means expanding both the robot body and the robot brain because the body is important. The robot can only do what its body is going to be able to do.

And then for that body to do things, it needs to have a good brain. And so, right now, we have new advances in designing or co-designing robot bodies and brains together according to task specifications using AI. And so that means that in a few hours, you can actually get a custom robot that could then be manufactured at industrial grade in a few days.

And this solves a very important problem, which is that right now, we have certain fixed architecture robots and we kind of have to adapt the tasks that we assign the robots to the architecture. Now, we can get the robot architecture to be customized and adapted to the task. And along these lines, I would say we’re working with new materials.

We’re working with soft materials. We’re working with some AI design materials for the body part. And for the brain part, we are developing physical AI, which is a different kind of AI than the large language models you all are using today.

It’s an AI solution that has embedded in it an understanding of the physics of the world, which large language models do not. And also an AI solution that provides other properties. It allows the robot to learn the skill rather than a task in context.

When you learn a task in context with much of today’s solutions, that means that for every new context, you have to redo the training. If you learn the skill, which is how humans learn, then you can apply that skill in many different contexts. And we also have more adaptive solutions.

So in other words, today’s models are frozen after training. That means when you deploy them in the wild, they cannot continue to learn. But with our physical AI solutions, we can get AI solutions that continue to adapt after training based on the inputs that they see.

So this is extraordinary. It’s opening so many opportunities.

Jamie Heller

And is it something like we’re just at the cusp of it, or is it really starting to happen? Where are we on the arc of this science?

Daniela Rus

Well, it’s happening. In fact, in my lab, we have been studying these questions for many years now. And our company, Liquid AI, is already bringing these physical AI models to business.

And it’s providing small, compact models that use a technology we call liquid networks that run on device. That means you don’t need to do the cloud calls. So, that means energy efficiency, it means privacy, and it also means capability without the risk of latency.

Jamie Heller

Wow. Okay.

So, Tianlan, let’s hear from you. How fast are things going and what’s most exciting in your company right now?

Shao Tianlan

Yeah. Over the last 12 months, we delivered more than 10,000 intelligent robots. So, that number is more than the first eight years of our company combined.

So, I see a very clear trend of acceleration in the adoption of intelligent robots. Physical AI is turning from a future vision, or just some concept, into a real-world, very helpful product. So, I think it’s always very difficult to start.

So, the first 10,000 units actually took us eight years. But the second 10,000 units took us only one year. So, I see a very clear trend.

And in technology, last year, we found out that some things we thought were very, very difficult have now become reachable, for many reasons: improvements in infrastructure, including simulators, more available real-world data, and advances in AI models.

So, I’m very, very optimistic and confident that physical AI, and not just humanoids, but basically AI empowering all kinds of robots, will do very impactful things within several hundred days.

Jamie Heller

When you say you did something that seemed very difficult and you got it done, did this have to do with the 3D vision? It might be a little complex, but can you share?

Shao Tianlan

Yes. Say we want a robot to grab something from a jar, or open a drawer, take a look at what’s inside, and bring something up. We humans get that ability probably at age three, but it’s quite difficult for robots. Now we can train this so-called world model, aligning everything, including vision, robot motion, everything, in one specific space, and controlling what the robot does. That was not so imaginable just a few years ago. But now, I think, it’s a reality already.

Jamie Heller

Okay. Did you want to get in here?

Daniela Rus

I just wanted to add that there are so many extraordinary applications and deployments that we already have in the world today that are bringing value and contribution to people. For instance, our company Venti Technologies is automating port operations. It is moving containers, and we have entire fleets of robots that operate 24/7 without the need of human drivers. Yet human drivers are also in the loop to step in when the weather is bad or when there is a lot of need for movement. This is a very interesting way to think about robots as collaborators, as tools that we bring into the work environment that can take on some of the work. In particular, ports are really notorious for not having enough drivers, and in general there is a lack of truck drivers around the world. So bringing autonomy, bringing robotic concepts to this industry is important. It means that goods move much faster. And I also wanted to give the example of the company Symbotic.

Symbotic automates storage systems, and they move boxes. They depalletize and repalletize, and they move millions of boxes every day. And through this operation, they’re lowering the cost of food, they’re speeding how food gets from one place to another.

Jamie Heller

These are fascinating industrial applications. Let’s get into settings beyond the factory floor, whether it’s hospitals or hotels or workers in the field. How safe are they?

What are the problems still facing semi-structured environments as opposed to fully structured environments?

Jake Loosararian

Yeah, I guess I can start. A lot of times we focus on physical intelligence and robotics that are very specific, doing certain functions, or it’s pretty easy. There have been autonomous vehicles in mining, for example, with vehicles taking ore and material from a mine to refining.

These kind of systems have been around for a long time. I think the big change that’s happening is, because robotic systems are becoming way more intelligent, and I think the right way to think about this is not generalized, but actually more specific, I call it droid over humanoid.

It touches on this really big problem of deployment, and I think Andreessen Horowitz actually came out with a really good article about deployment is the big problem right now for robotics in terms of the ability for it to begin to make really large impacts, and for there to be a clear road map.

I think that’s a big problem. Typically, robotics companies will make a single product and want to make a bunch of that single product, and then they end up creating a path towards… Basically, they create a potential path towards just, like, you lose your unfair advantage.

It just becomes commoditized. So the key is actually: how do you know what the next sorts of robotic systems are to make — the droids that lead you into the more generalized humanoid? This is where very specific robotics feed into systems that help companies make kilowatts or barrels per day, or move shipping containers faster; for Amazon, it was, how do I get two-day shipping?

Robotics can provide information and data sets that help a customer and a worker accomplish the main job of that business outcome. That perpetuates a cycle: okay, if I had a robot and a data set that could do this or get that, it creates a really good roadmap for making more and more robotics. So for our company, we actually don't sell robots.

We make our robots and deploy them into the environments for our customers. We figure out how to make them smarter and better at collecting information and data, without locking in a type of robotic platform and then having to continually upgrade it as systems and hardware get smarter.

So I think that’s, you know, that’s a really important, I think, paradigm shift for most roboticists is you have to learn about the environment, so you have to forward deploy and build your robots as close to the environment as possible, and that gives you the information and data set that doesn’t exist anywhere on the Internet or anywhere on YouTube about what these environments are like, and so that’s the key.

Daniela Rus

I would agree, and add that there are some real technical challenges. If you're in an unstructured environment, there's a long tail of situations that you haven't tuned the robot for, and so learning what the long tail is, and learning how to deal with each of those situations, is a challenge.

From a technical point of view, perception — the ability of the robot to correctly understand its world — is a challenge. We're pretty good at getting robots to move in the world. We still have challenges with manipulation, with handling the world. And to make progress there, we need better sensors. Our robots don't have the analog of skin-like sensing.

So there are plenty of technical challenges. And yet, if we look at what machines can do today, we can come up with a wide range of applications where the technology is suitable. So an important thing to keep in mind when developing and deploying robotic solutions is to ask: what does the application need?

And is the technology a good fit for that application? Or are there situations where the application has needs that the technology is not ready for?

Jamie Heller

So Tianlan, when you put these 10,000 products out into the world, what’s your biggest concern of the risk? How could it go wrong? What could actually hurt your customer?

And how do you protect against that?

Shao Tianlan

Sometimes I like to compare intelligent robots with more familiar tools we use in daily life, like cars or chainsaws. We have very clear rules about where we should use them, who should be able to use them, and what we do if something goes wrong.

When it fails, how do we minimize the consequences? Of course, cars or chainsaws or electric trimmers can also cause harm. But that doesn't prevent them from being very useful.

So I would say we need the same thing: clear boundaries, very good definitions, and rules. For example, on factory floors and in logistics centers, our systems are already helping workers — moving cartons and parts around, and also doing a lot of assembly, welding, screwdriving, many of these things.

So there are safety standards, and it's very clear who should be able to touch it. Such applications don't involve direct human interaction. It's not like helping the elderly or, say, raising a child.

Jamie Heller

So you're saying you're not doing that currently?

Shao Tianlan

Currently not. So actually, we don't have to wait until humanoids are working among us and interacting directly with human beings before we deploy, say, hundreds of millions of useful robots, especially in manufacturing, in logistics, and in some service industries.

If we think about the physical work carried out today by humans, the vast majority of it actually happens in relatively controllable environments. The definition of the task is quite clear and doesn't involve direct human interaction. Those are the tasks where, in the next few hundred days, robots can be very helpful.

Jamie Heller

I do want to just hold that point, because I think there is a big debate: is AI just another tool, or is AI super powerful on another level? It's not just like a car — we have to be extra careful about how much power we're giving these robots. So it's just something to think about. But go ahead, Jacob.

Jake Loosararian

Yeah, well, one thing I'd love to get to is the role of teleoperation, and how the average person should take being told by tech leaders — potentially on this panel here, too — that robot revolutions are coming in three years, and they're going to have robots folding your laundry and doing your dishes, etc. The dirty little secret, of course, is: sure, if you want to pay $40,000 for that, and you want someone to be able to see what's going on in your house — maybe you're getting out of the shower, and there's someone teleoperating your robot. There is autonomy for certain tasks, but in the majority of cases for humanoids, it's learning in the environment, and it has to do that with teleoperation. And there's a funny little video.

It’s teleoperation. Just okay So basically it’s someone with a headset that’s like helping to maneuver like when you see these videos online It’s there’s someone operating the robots for the most part and there’s even a funny video where someone takes the headset off It’s like somewhere, you know behind a closed door and then the the robot, you know Yeah, it’s like it like goes like that and it like falls over You know, I think it’s just like important the average individual that’s like thinking about living autonomously and robots among us It’s like they don’t really realize You know how Teleoperation is a prerequisite for full autonomy because we’re still learning the environment And I also think it also should should begin to ask the question okay, if you if you have to understand the environment that the robots are being deployed in that actually should that actually should allow for roboticists to begin to think of themselves not just as how can I get information about how the robot’s performing, but how can I get information about what sorts of actions are being taken at the customer.

So, how do I make steel, for example, should be a thing that roboticists begin to think more about. You have to understand the environment — why am I turning this valve because of this alarm? I'm thinking of more industrial applications, but of course this applies to the more everyday things too.

As robots begin to understand and learn the environment they're in, they're also learning the subject matter expertise of humans — many of whom are retiring in the sectors I'm referring to — what sorts of actions they take and why.

And that actually is very important for roboticists to understand. There’s a lot of power in understanding the physical world and creating the brain, the world model, the foundational model.

Daniela Rus

And I think it should drive industries, actually. The other thing I would add is that there is a real gap between getting a robot to do a task in a research setting and deploying it. I can give you a robot that will fold your laundry and load your dishwasher, but it might cost you half a million dollars.

Right. Then you have to get from that to a deployment where you have a few tens of robots operating in some specified settings, and then to really fully scaling up the solution. It takes a lot of hard work to go from research lab to physical demonstration to a truly scaled-up solution, especially when it comes to manipulating the world.

Jamie Heller

So, talking about manipulation — again, you're the experts, not I, but coming to Davos for years, you see that little dog robot walking on the street, and that doesn't have a brain. Isn't the breakthrough here that we're building robots that are going to be able to think and reason more on their own?

Well, exactly — so we are going to…

Daniela Rus

We need robots that look at the world and understand that there is a step here, and the surface might be slippery, and can adapt in situ. That requires that the robots have a brain with the capability to perceive the world and very quickly adjust its motions in order to respond to the world. So we're beginning to develop robot brains that can do this kind of exceptional navigation. In fact, there are some exceptional examples of robots that can hike a trail. And if you think about how complicated it is to hike a trail: you have to look at where you put your step.

How do you balance the rest of your body? These kinds of mobility advances are already happening. What's difficult for manipulation is that we don't actually have skin-like sensors able to give us the kind of detailed information my fingers get when I rotate this glass. I don't even look at it, but I feel it.

I sense forces and torques. I sense them really fast and with a high degree of accuracy, and then I can do very dexterous tasks. So we're lacking the sensors that give us this level of dexterity for robot hands. We can do other things: we can have robots take the glass and move it from here to there, and this is enabling all the warehousing and logistics progress we have seen.

Jamie Heller

I’m gonna come to questions in just a minute, but now just today. It’s that Elon Musk is going to be here. I’m basically Robotics are a huge part of Tesla if you were each able to ask him one question.

What would you ask him I?

Jake Loosararian

I would ask him: when can I buy all your robots? Can I explain that for just a second? What I mean is, with all the things we're talking about, there's an acceleration of learning going on, and the infrastructure that's coming — I think we're underestimating the impact that has in terms of compute and energy (by the way, I don't think we're modeling those appropriately), and in terms of rare earth materials. Beyond all the exciting problems to overcome in robotics — teleoperation, etc. — there is incredible importance in figuring out how to get the ROI from the robot and the system, and then pursuing adoption and deployment earlier than you would probably expect. You don't need a million miles. You can get away with not-perfect actions by robots if you do it in specific tasks. So from a labor perspective, and from a demand perspective — whether it's energy or materials — robotics, and humanoids in particular, are going to be game-changing. They're going to completely upend the economic landscape for companies, and also production: how effective and efficient we are, and how much growth we have as nations.

Jamie Heller

So your first question would be: how many can you buy?

Jake Loosararian

Yeah — I want to buy more robots than any other company.

Daniela Rus

I would just say that I might debate your point, because humanoids are not actually capable of delivering their full potential yet. Even though Elon Musk said that by this year he will have several million robots in factories, I would ask: what is the technical plan for the manipulation, for the hands of the robot?

And I will also remark that Elon Musk told us in 2017 that we would be falling asleep at the wheel by 2019. And we're still not falling asleep at the wheel. So what I might say is that I actually love the vision, but I think he sometimes gets the timing wrong.

Shao Tianlan

I’d like to comment a little bit about the robot hand. I personally have tried to live with only three fingers and only do this action. I personally have tried that for several hours.

Life is just fine. Of course, I failed to use chopsticks, but I could use a spoon and knife instead. So seriously, you can also try it.

Actually, I encourage it. I think this justifies my opinion that we don't need human-level dexterity to enable robots to do many useful things. You can try it at home.

It’s not that dangerous. So that’s why, say, I’m depicting a so-called graduated autonomy and intelligent robot. It’s not like, say, suddenly we got a robot that can do every action a human can carry out.

It’s like, OK, robot of all kinds, maybe with three fingers or, say, with one arm, or, say, one arm on a dog, who knows? Yeah, so all kinds of robot in certain applications performing certain tasks very reliably. Yeah, so that’s the near future.

I mean a future in several hundred days — that's what I'm envisioning. Of course, I also share the further future vision.

Yeah, but I think we don’t need to wait, say, until the whole humanoid or robot hand to be very, very advanced. I’d like to comment to that robot dog thing. There are three major directions in robot abilities.

navigation, locomotion, and manipulation.

Jamie Heller

Navigation, locomotion, and?

Shao Tianlan

Manipulation. Manipulation. Locomotion is basically the dancing, parkour thing.

Navigation is about moving around — finding out where I am and how to go somewhere else. And manipulation is what Professor Rus has just mentioned: moving things around, doing operations.

So currently, what’s missing, what prevents robots to be super useful in our daily life, our manufacturing, our logistics, is mainly about manipulation. But we see very concrete and fast advancement in recent days.

Daniela Rus

So you just gave me an idea. For all of you to understand why manipulation is difficult: just try manipulating your phone with your fingernails, not with your skin, and see how difficult that is.

So robots are made primarily from hard plastics and metal. And those materials are more like fingernails than the skin and flesh we have on our fingertips. So on the research side, we are advancing new materials that will give us skin-like interaction.

But then we have to take that from the research lab to the demonstration and scaling it up. So it’s coming, but it’s not today.

Shao Tianlan

Yeah, but the good news is that we don't need an Einstein level of intelligence. I really like visiting zoos — when I lived in Munich, I was at the Munich zoo maybe every two months.

You can find animals that have no idea about language doing perfect manipulation. A squirrel monkey, this big, with probably a 30-gram brain, can do perfect manipulation. So it's not that far away from us, I would say.

Jamie Heller

Just listening to you all, though — doing what you do, you must find the human being remarkable. All the different ingredients: the brain, the skin. Okay, questions?

Right here in the front, just if you could stand and say your name, where you’re from.

Audience

Sure, I’m Vanessa Mendez.

Jamie Heller

I think they have a mic for you.

Audience

Yeah, thank you. Thank you for the session, I think it was very insightful. I come from Houston, Texas as well.

I’m also representing the global sharecropper community. I’m currently running a startup that focuses on drone automation and visual AI for the solar industry and how we can improve asset integration for solar assets, so very close to what you’re doing. And you know, you pose an interesting aspect, which is some of the sites, most of the sites are in remote areas, meaning the labor thing, it’s actually a big problem.

Thank you. So, as I was saying, some of these sites are in remote areas, so the cost of deploying these projects is higher than it could be with automation. So my question to you is: given the current state of autonomy in drones and how capable they are — we see Skydio as a big example — how close are we to seeing broader automation in other sectors of infrastructure management in remote areas?

Thank you.

Jake Loosararian

Yeah, maybe I’ll interpret that question as like, in the future, you want to be able to have a path towards, especially like very remote assets, being able to run them more autonomously and reduce the amount of one’s safety issues and then also increase the margin for basically operating these, especially in environments where there’s not as much skilled labor force.

Well, I do think that if we get the model right of moving towards automation in small steps — and one case could be these autonomous drones — then for less complex environments like a solar field or a wind turbine, you could begin to get that in the next three, four years. The problem is, you know…

Jamie Heller

Remote control, basically?

Jake Loosararian

Yeah, mostly autonomous or completely autonomous.

But I think the question comes down to: what's the bang for the buck? It's really consequential in certain industries to be able to make more tasks autonomous. I think it's going to take more than three years to get to those sorts of tasks — whether it's at a shipyard, having to do a turnaround of an aircraft carrier, or something like that.

So I think most of the robotics, most of the software and AI, should be focused on the highest-ROI applications. That's going to take probably another five, six, seven years, as long as companies shift their thesis on getting there in a big way.

Daniela Rus

Okay, so in remote environments, you need a really good perception system, and you might not have GPS. You can solve that problem if you adopt on-device physical AI models, because those models will allow you to adapt, to do the navigation, and to respond to whatever the environment looks like.

And the good news is that we are beginning to develop these types of models. They’re called state-space models and check out the Liquid AI open-source models, which might be very effective for your problem.

Jamie Heller

Could you just repeat that? What is the name of the models?

Daniela Rus

Liquid AI.

Jamie Heller

Liquid AI, what kind of models did you say?

Daniela Rus

State-space models.

Jamie Heller

State-space, mm-hmm.

Daniela Rus

This means that they are built on an underlying mathematics that captures the physical world through differential equations. So it’s a different approach to building the model than the transformer model.

Jamie Heller

Okay, another question here. First hand I saw here, thank you. Let us know who you are.

Audience

Well, thank you for your interesting discussion. My name is Miguel, I come from Spain. I’m a physics PhD student, but I’m really interested, sorry.

In the brain, in the robotic brain thing, and in what you've been discussing about reacting to unprogrammed events or to accidents. I think the key for that is consciousness.

So my question is: what level of consciousness are we able to reach at the current moment, and what level can we reach in the future? Is it even possible to reach human consciousness, and how long would it take, of course?

Daniela Rus

Well, first of all, we have to define what consciousness is, and I think we don't fully understand the mechanics that make humans conscious, so it's difficult to think about applying that to a machine.

But what we can say is that maybe conscious means being aware of your environment and being able to respond to it. If we adopt this simple definition, then at some level we can build machines with a certain level of awareness. However, we have to be careful when we build our technical solutions.

We have to be able to characterize them. We have to be able to say when do they work? When do they not work?

What is the uncertainty that we expect from the predictions and recommendations we get from the systems? So an exciting topic that would get us to more capable robots is to build more uncertainty understanding within the robot brain. Through understanding the uncertainty of its predictions, the machine will be much better at doing tasks.

Shao Tianlan

I’d like to add to that. Actually, we can think about how the human brain works. So many research say that the human brain is super efficient.

We are not using our brain that much when things are normal. You've probably had the experience of listening to music or talking on the phone while doing daily chores like laundry or preparing food, without even forming memories — sometimes you cannot recall, for example, how you took the subway or bus home, because it's so routine to you.

But when things become abnormal — when surprising things happen — we start using our brain. I think this ability is super important if we want to use robots reliably and safely. A robot needs to know what is normal and what is not so normal, so we can have a fallback strategy — for example, calling a human operator to intervene. That's a kind of consciousness: basically, the ability to evaluate risk and to detect what is abnormal.

Jamie Heller

Isn't that the whole risk with AI and robotics — the one, I guess, more talked about when ChatGPT first came around — that we're going to create these robots thinking we'll have them under control, and then we won't?

Daniela Rus

Well, it is definitely a challenge. And again, I want to go back to my definition of a robot body and brain. We can build robot brains without AI — we can put algorithms in the brains of the robots — and then we will have some limitations on what those robots can do. The attraction of bringing data and physical AI models to the brain is that we give the robots the ability to be more adaptive, and the ability to do tasks that cannot be modeled from first principles. I can model this task from first principles; I don't need an AI solution to tell me how to pick something up and move it from here to here.

But if I want to I don’t know, what’s your favorite task in the kitchen? If I want to…

Jamie Heller

Lee’s favorite, do the dishes.

Daniela Rus

If I want to do the dishes, it’s so difficult to model all your body movements from first principles, which is why collecting data from humans and teaching robots how to do those tasks in a human-like fashion is important.

But that requires this data to pass through systems that understand more than statistical correlations — systems that can really do the spatial-temporal correlations involved in doing the dishes.

Jamie Heller

I think we have time for one, two more questions, we have one in the front, yes.

Audience

Hello, my name is Giulia, I'm from Italy. I'm also a Global Shaper, and I'm a researcher in human-robot interaction. I have so many questions, actually, but I'll choose my favorite one, which is about humans. I was wondering what your opinion is on what makes human-human interaction natural?

And what are the biggest challenges, in your opinion, to make robots capable of interacting naturally with us?

Shao Tianlan

I would say demonstrations. Basically, that's how we humans learn most things. We have several different intelligences: we learn the ability to grasp something, or to throw or dribble a ball, by watching and practicing.

But we cannot learn in that way how to calculate 25 times 52 — that kind of intelligence is another thing. So if we want to deploy a robot, for example on a factory floor or in logistics, I would say demonstration is the most intuitive way to tell a robot what to do. And that's exactly how we humans show others, for example, how to do work in the kitchen.

Daniela Rus

The vast majority of the machines that we have built so far are such that we adapt to them rather than the other way around. So, an important question here is can we build machines that adapt to humans? That means that those machines have to be really good at perception, at situation awareness.

They have to be able to do activity recognition, and not just on the visual side. They have to understand: oh, I'm trying to move a box and I'm struggling, it's too heavy. Can the machine come in, just like a teammate, to help me pick up the box?

And then once we’re both holding the box, carrying a big box, it requires that the machine understands the forces and torques that I put on the box to respond in the same way. And also gives me feedback to understand that it’s doing the right thing and it’s working as a teammate rather than as an inert tool.

Jamie Heller

So, there’s been this thing on Instagram, what it was like in 2016. If you could go back to 2016, 10 years ago, would you think where we are today on robotics is farther than you predicted we would have come or not as far as we predicted we would have come? Like, where are we in terms of our aspirations versus our results in the last 10 years?

If you went back 10 years.

Jake Loosararian

10 years. Ten years ago was when I went through Y Combinator in Silicon Valley, after bootstrapping Gecko for three years. When I went to Y Combinator, doing a robotics company was basically the same as choosing to die a slow death or a fast death —

especially robotics ventures that throw a billion dollars at a research project that ends up dying. So I think it was pretty bleak 10 years ago, actually. When I was growing up, I was expecting — maybe like some of you — a world where robots were among us and helping us, where it was a beautiful symbiotic relationship.

We’re helping each other and that reality was not coming at all and So I am actually exceptionally excited about where we are ten years later

Jamie Heller

Good, okay — just a couple more minutes. So do you feel like we're farther along, or not as far along, as you expected?

Daniela Rus

In some ways we're further, in other ways we're not. Ten years ago, we were not talking about AI. We were not talking about physical AI. Self-driving projects looked like science projects, and now we have companies that have deployed autonomous mobility in so many settings.

We have robots that deliver on sidewalks. We have self-driving cars in geofenced environments. We have robots that move containers and packages. So we have a lot of machines, but we have not made as much progress on the manipulation side. This vision of Rosie, the robot assistant in your home that will do all of your chores, remains some way away.

However, I think I can give you a self-driving garbage can, if you don't like to take your garbage out — that is possible. But the more general humanoid in-home robot will be a long way away. And I'm also very excited about where we are, because we've made huge advances on the materials side, on the hardware side — motors and sensors — on the computational side, on the data side, and on the AI side. So we're waiting for the young generation to take all this wealth of capabilities and turn it into a magical future where machines will support us.

Jamie Heller

Young generation, you've got 22 seconds.

Shao Tianlan

Oh, yeah. I would say it in one sentence: the hardest things are already behind us. So I'm very confident.

Jamie Heller

That’s great. Thank you all so much, Jim. Thank you all.

Jake Loosararian

Speech speed

184 words per minute

Speech length

2171 words

Speech time

705 seconds

Physical AI emergence driven by ROI focus on data collection and decision-making capabilities

Explanation

Loosararian argues that the focus on AI’s return on investment has led to interrogating data sets that feed AI models, which reveals gaps in available information and drives the need for physical AI. This creates a cycle where companies need robots to gather previously non-existent data sets from physical environments to make AI investments worthwhile.


Evidence

Gecko’s platform called ‘cantilever’ that pulls data from various robots into one source of truth for decision-making using information that never existed before


Major discussion point

Recent Advances and Breakthroughs in Robotics


Topics

Economic | Infrastructure


Agreed with

– Daniela Rus
– Shao Tianlan

Agreed on

Physical AI and data-driven approaches are transforming robotics capabilities


Need for teleoperation as prerequisite for full autonomy while robots learn environments

Explanation

Loosararian explains that most humanoid robots currently require human operators with headsets to control them remotely, especially when learning new environments. He emphasizes this is a necessary step toward full autonomy that the public doesn’t fully understand.


Evidence

Reference to videos showing robots falling over when teleoperators remove their headsets, and the reality that someone needs to see into homes for domestic robot applications


Major discussion point

Technical Challenges and Limitations


Topics

Infrastructure | Legal and regulatory


Focus on specific ‘droid over humanoid’ applications rather than generalized robots

Explanation

Loosararian advocates for developing specific robotic solutions that solve particular business problems rather than trying to create generalized humanoid robots. This approach prevents commoditization and creates a clear roadmap for advancement.


Evidence

Examples of autonomous vehicles in mining, Amazon’s two-day shipping optimization, and applications oriented toward specific business outcomes like making kilowatts or barrels per day


Major discussion point

Deployment and Safety Considerations


Topics

Economic | Infrastructure


Agreed with

– Daniela Rus
– Shao Tianlan

Agreed on

Industrial and controlled environment applications are currently most viable


Disagreed with

– Daniela Rus

Disagreed on

Timeline and feasibility of humanoid robot deployment


Forward deployment in actual environments necessary to gather data that doesn’t exist online

Explanation

Loosararian argues that robotics companies must deploy their robots directly in customer environments rather than selling them as products. This approach allows companies to learn about environments and gather data that isn’t available on the internet or YouTube.


Evidence

Gecko’s business model of not selling robots but deploying them for customers and learning how to make them smarter in real environments


Major discussion point

Deployment and Safety Considerations


Topics

Economic | Infrastructure


Business model shift from selling robots to deploying and operating them for customers

Explanation

Loosararian describes how his company doesn’t sell robots but instead deploys and operates them for customers. This model allows for continuous improvement and avoids being locked into specific robotic platforms that become obsolete.


Evidence

Gecko’s approach of making and deploying robots in customer environments while figuring out how to make them smarter without locking into specific platforms


Major discussion point

Current Applications and Market Reality


Topics

Economic | Development


Disagreed with

– Shao Tianlan

Disagreed on

Approach to robot deployment and business models


Robotics industry transformation from bleak prospects 10 years ago to current excitement

Explanation

Loosararian reflects that 10 years ago, starting a robotics company was considered choosing a slow or fast death, with billion-dollar research projects typically failing. He contrasts this with current excitement about the field’s prospects.


Evidence

His experience going through Y Combinator 10 years ago when robotics companies were viewed as doomed ventures


Major discussion point

Future Vision and Timeline Expectations


Topics

Economic | Development


D

Daniela Rus

Speech speed

156 words per minute

Speech length

2403 words

Speech time

921 seconds

Co-designing robot bodies and brains together using AI for custom robots in hours

Explanation

Rus explains that her lab has developed methods to design robot bodies and brains simultaneously according to task specifications using AI. This allows for custom robots to be designed in hours and manufactured at industrial grade in days, solving the problem of adapting tasks to fixed robot architectures.


Evidence

Development of physical AI with embedded physics understanding, AI solutions that learn skills rather than tasks in context, and adaptive solutions that continue learning after deployment


Major discussion point

Recent Advances and Breakthroughs in Robotics


Topics

Infrastructure | Economic


Agreed with

– Jake Loosararian
– Shao Tianlan

Agreed on

Physical AI and data-driven approaches are transforming robotics capabilities


Manipulation remains the biggest challenge due to lack of skin-like sensors for dexterous tasks

Explanation

Rus identifies manipulation as the most difficult challenge in robotics because current robots lack sensors equivalent to human skin that provide detailed force and torque feedback. While robots can move well in the world, handling objects with dexterity remains problematic.


Evidence

Comparison of robot materials (hard plastics and metal) to fingernails rather than human skin and flesh, and the analogy of trying to manipulate a phone using only fingernails


Major discussion point

Technical Challenges and Limitations


Topics

Infrastructure | Development


Agreed with

– Shao Tianlan

Agreed on

Manipulation remains the most significant technical challenge in robotics


Disagreed with

– Shao Tianlan

Disagreed on

Requirements for robot dexterity and manipulation capabilities


Long tail of unstructured environment situations that robots haven’t been tuned for

Explanation

Rus explains that unstructured environments present numerous unexpected situations that robots haven’t been specifically programmed to handle. Learning to identify and deal with these edge cases remains a significant technical challenge.


Evidence

Discussion of perception challenges and the need for robots to correctly understand their world in unstructured environments


Major discussion point

Technical Challenges and Limitations


Topics

Infrastructure | Development


Gap between research lab demonstrations and truly scaled deployment solutions

Explanation

Rus acknowledges that while research labs can create robots that perform tasks like folding laundry or loading dishwashers, these solutions may cost hundreds of thousands of dollars. Scaling from research demonstrations to practical deployments requires significant additional work.


Evidence

Example of a robot that can fold laundry and load dishwashers but might cost half a million dollars


Major discussion point

Technical Challenges and Limitations


Topics

Economic | Development


Industrial applications in ports, storage systems, and manufacturing already delivering value

Explanation

Rus highlights that robots are already successfully deployed in industrial settings, providing concrete benefits like moving containers 24/7, automating storage systems, and reducing food costs. These applications show robots working as collaborators with humans rather than replacements.


Evidence

Venti Technologies automating port operations with fleets operating 24/7, and Symbotic moving millions of boxes daily in storage systems, lowering food costs and speeding distribution


Major discussion point

Current Applications and Market Reality


Topics

Economic | Infrastructure


Agreed with

– Jake Loosararian
– Shao Tianlan

Agreed on

Industrial and controlled environment applications are currently most viable


Cost barriers remain significant with useful robots potentially costing hundreds of thousands

Explanation

Rus points out that while functional robots exist, their costs remain prohibitively high for most applications. The economic barrier is a major factor preventing widespread adoption of robotic solutions.


Evidence

Example of a robot capable of household tasks costing half a million dollars


Major discussion point

Current Applications and Market Reality


Topics

Economic | Development


Humanoid home assistants remain far away despite advances in other areas

Explanation

Rus argues that while significant progress has been made in mobility and industrial applications, the vision of general-purpose humanoid robots for home use is still distant. She can envision specific applications like self-driving garbage cans but not comprehensive home assistants.


Evidence

Comparison between achievable specific applications like self-driving garbage cans versus the unrealistic timeline for general humanoid home robots like Rosie from The Jetsons


Major discussion point

Future Vision and Timeline Expectations


Topics

Economic | Sociocultural


Disagreed with

– Jake Loosararian

Disagreed on

Timeline and feasibility of humanoid robot deployment


Machines should adapt to humans rather than humans adapting to machines

Explanation

Rus argues for a paradigm shift where machines are designed to understand and adapt to human behavior rather than requiring humans to learn how to interact with machines. This requires advanced perception, situation awareness, and the ability to work as teammates.


Evidence

Example of a robot understanding when a human is struggling with a heavy box and automatically helping, requiring understanding of forces, torques, and providing appropriate feedback


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Sociocultural | Human rights


Agreed with

– Shao Tianlan

Agreed on

Demonstration-based learning is the most natural way for human-robot interaction


Consciousness defined as environmental awareness and appropriate response capability

Explanation

Rus provides a practical definition of consciousness for robots as being aware of their environment and able to respond appropriately. She emphasizes the importance of building uncertainty understanding into robot brains to make them more capable.


Evidence

Discussion of the need to characterize when systems work or don’t work and understanding prediction uncertainty


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Sociocultural | Infrastructure


S

Shao Tianlan

Speech speed

139 words per minute

Speech length

1306 words

Speech time

559 seconds

Acceleration in adoption with 10,000 intelligent robots delivered in one year versus eight years for first 10,000

Explanation

Tianlan reports a dramatic acceleration in robot adoption, with his company delivering more robots in the last 12 months than in the previous eight years combined. This demonstrates that physical AI is transitioning from concept to practical, helpful products.


Evidence

Specific numbers: first 10,000 units took 8 years, second 10,000 units took only 1 year


Major discussion point

Recent Advances and Breakthroughs in Robotics


Topics

Economic | Development


Disagreed with

– Jake Loosararian

Disagreed on

Approach to robot deployment and business models


World models enabling complex tasks like grabbing objects from containers that seemed impossible years ago

Explanation

Tianlan explains that advances in AI models, simulators, and real-world data have made previously impossible tasks achievable. Robots can now be trained, using world models, to perform tasks that humans master by age three, such as grabbing objects from containers.


Evidence

Example of training robots to open drawers, look inside, and retrieve objects – abilities humans develop at age three but were technically challenging for robots until recently


Major discussion point

Recent Advances and Breakthroughs in Robotics


Topics

Infrastructure | Development


Agreed with

– Jake Loosararian
– Daniela Rus

Agreed on

Physical AI and data-driven approaches are transforming robotics capabilities


Clear boundaries and rules needed similar to cars or chainsaws for safe deployment

Explanation

Tianlan advocates for establishing clear safety standards and operational boundaries for robots, similar to how society manages other potentially dangerous but useful tools. He emphasizes defining where robots should be used, who can operate them, and how to minimize consequences when failures occur.


Evidence

Comparison to cars and chainsaws as useful tools that can cause harm but have clear usage rules, and examples of current safe applications in factory floors and logistics centers


Major discussion point

Deployment and Safety Considerations


Topics

Legal and regulatory | Cybersecurity


Robots don’t need human-level dexterity to perform many useful tasks effectively

Explanation

Tianlan argues that robots can be highly useful without achieving full human-level capabilities. He suggests that even with limitations like having only three fingers, robots can perform most necessary tasks effectively.


Evidence

Personal experiment of living with only three fingers for several hours, finding life manageable except for chopsticks use, and observation of animals like squirrel monkeys performing complex manipulation with small brains


Major discussion point

Deployment and Safety Considerations


Topics

Development | Infrastructure


Disagreed with

– Daniela Rus

Disagreed on

Requirements for robot dexterity and manipulation capabilities


Focus on controllable environments without direct human interaction for near-term deployment

Explanation

Tianlan emphasizes that current robot deployments should focus on controlled environments like manufacturing and logistics where tasks are clearly defined and don’t require direct human interaction. This approach allows for safe, useful deployment without waiting for more advanced capabilities.


Evidence

Examples of robots currently helping workers by moving containers, doing assembly, welding, and screwdriving in factory floors and logistics centers


Major discussion point

Current Applications and Market Reality


Topics

Economic | Infrastructure


Agreed with

– Jake Loosararian
– Daniela Rus

Agreed on

Industrial and controlled environment applications are currently most viable


Graduated autonomy with various robot forms performing specific tasks reliably in hundreds of days

Explanation

Tianlan envisions a near-term future where various types of robots with different capabilities (three fingers, one arm, etc.) perform specific tasks very reliably rather than waiting for fully capable humanoids. This graduated approach allows for practical deployment in the immediate future.


Evidence

Examples of robots with limited capabilities like three fingers or one arm on a dog performing specific applications reliably


Major discussion point

Future Vision and Timeline Expectations


Topics

Economic | Development


Hardest technical challenges are behind us with rapid progress ahead

Explanation

Tianlan expresses strong confidence that the most difficult technical hurdles in robotics have been overcome, setting the stage for rapid advancement and deployment in the near future.


Major discussion point

Future Vision and Timeline Expectations


Topics

Development | Infrastructure


Robots need ability to detect abnormal situations and call for human intervention

Explanation

Tianlan explains that robots should function like humans do – operating automatically during normal situations but engaging higher-level thinking and calling for help when abnormal situations arise. This ability to evaluate risk and detect anomalies is crucial for safe and reliable operation.


Evidence

Analogy to human behavior of performing routine tasks without conscious thought but engaging full attention when unexpected situations occur


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Cybersecurity | Legal and regulatory


Demonstration-based learning as most intuitive way for humans to teach robots

Explanation

Tianlan argues that the most natural way for humans to teach robots is through demonstration, similar to how humans learn physical skills like grasping or throwing by watching and practicing. This approach is more intuitive than other forms of instruction.


Evidence

Comparison to how humans learn physical skills versus computational skills, noting that demonstration works for physical tasks but not for mathematical calculations


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Sociocultural | Development


Agreed with

– Daniela Rus

Agreed on

Demonstration-based learning is the most natural way for human-robot interaction


J

Jamie Heller

Speech speed

164 words per minute

Speech length

822 words

Speech time

300 seconds

AI and robotics represent unprecedented power that requires extra caution beyond treating them as ordinary tools

Explanation

Heller challenges the comparison of AI-powered robots to familiar tools like cars or chainsaws, suggesting there’s a significant debate about whether AI represents a fundamentally different level of power. She emphasizes the need to consider how much power society is giving to these robots and whether they require special precautions.


Evidence

Reference to the ongoing debate about whether AI is ‘just another tool’ or represents something more powerful that demands extra careful consideration


Major discussion point

Deployment and Safety Considerations


Topics

Legal and regulatory | Human rights


The breakthrough in robotics is the development of thinking and reasoning capabilities, not just mobility

Explanation

Heller distinguishes between earlier robotic achievements like the walking dog robots seen at Davos and the current breakthrough of robots that can think and reason independently. She emphasizes that the cognitive capabilities represent the real advancement in robotics.


Evidence

Comparison between the dog robot walking on streets at Davos (which doesn’t have a brain) and current developments in robots that can think and reason


Major discussion point

Recent Advances and Breakthroughs in Robotics


Topics

Infrastructure | Development


Human beings are remarkably complex with multiple sophisticated systems working together

Explanation

Heller observes that studying robotics reveals the extraordinary complexity of human beings, noting how multiple systems like the brain, skin sensors, and other capabilities work in harmony. This complexity becomes apparent when trying to replicate human abilities in robots.


Evidence

Reference to the various human capabilities discussed by the panelists including brain function, skin-like sensing, and integrated systems


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Sociocultural | Development


A

Audience

Speech speed

153 words per minute

Speech length

361 words

Speech time

141 seconds

Remote infrastructure sites face higher deployment costs that automation could reduce

Explanation

An audience member from the solar industry points out that many infrastructure sites are located in remote areas where deploying human labor is far more expensive than automated alternatives would be. This creates a strong economic incentive for robotic solutions in infrastructure management.


Evidence

Personal experience running a startup focused on drone automation and visual AI for solar industry asset integration


Major discussion point

Current Applications and Market Reality


Topics

Economic | Infrastructure


Consciousness and awareness capabilities are key to robots handling unprogrammed events

Explanation

An audience member from Spain studying physics emphasizes that the ability for robots to react to unexpected events or accidents depends on developing some level of consciousness or awareness. This capability is seen as crucial for autonomous robot operation in unpredictable environments.


Evidence

Background as a physics PhD student interested in robotic brain development and autonomous response systems


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Infrastructure | Development


Natural human-robot interaction requires understanding what makes human-human interaction natural

Explanation

An audience member researching human-robot interaction argues that to make robots capable of natural interaction with humans, we must first understand the fundamental elements that make human-to-human interaction feel natural. This understanding is essential for developing effective human-robot interfaces.


Evidence

Professional background as a researcher in human-robot interaction and global shaper from Italy


Major discussion point

Human-Robot Interaction and Consciousness


Topics

Sociocultural | Human rights


Agreements

Agreement points

Manipulation remains the most significant technical challenge in robotics

Speakers

– Daniela Rus
– Shao Tianlan

Arguments

Manipulation remains the biggest challenge due to lack of skin-like sensors for dexterous tasks


Focus on controllable environments without direct human interaction for near-term deployment


Summary

Both speakers acknowledge that manipulation (handling objects) is currently the most difficult aspect of robotics to solve, though they differ on whether human-level dexterity is necessary for useful applications


Topics

Infrastructure | Development


Physical AI and data-driven approaches are transforming robotics capabilities

Speakers

– Jake Loosararian
– Daniela Rus
– Shao Tianlan

Arguments

Physical AI emergence driven by ROI focus on data collection and decision-making capabilities


Co-designing robot bodies and brains together using AI for custom robots in hours


World models enabling complex tasks like grabbing objects from containers that seemed impossible years ago


Summary

All three experts agree that the integration of AI with physical robotics, particularly through data collection and world models, represents a fundamental breakthrough enabling previously impossible capabilities


Topics

Infrastructure | Development


Industrial and controlled environment applications are currently most viable

Speakers

– Jake Loosararian
– Daniela Rus
– Shao Tianlan

Arguments

Focus on specific ‘droid over humanoid’ applications rather than generalized robots


Industrial applications in ports, storage systems, and manufacturing already delivering value


Focus on controllable environments without direct human interaction for near-term deployment


Summary

All speakers agree that current robotic deployments should focus on specific, controlled industrial applications rather than attempting general-purpose humanoid robots


Topics

Economic | Infrastructure


Demonstration-based learning is the most natural way for human-robot interaction

Speakers

– Shao Tianlan
– Daniela Rus

Arguments

Demonstration-based learning as most intuitive way for humans to teach robots


Machines should adapt to humans rather than humans adapting to machines


Summary

Both speakers agree that robots should learn from human demonstrations and adapt to human behavior patterns rather than requiring humans to learn complex robot interfaces


Topics

Sociocultural | Development


Similar viewpoints

Both acknowledge significant gaps between laboratory demonstrations and real-world deployment, emphasizing the practical challenges of scaling robotic solutions

Speakers

– Jake Loosararian
– Daniela Rus

Arguments

Gap between research lab demonstrations and truly scaled deployment solutions


Need for teleoperation as prerequisite for full autonomy while robots learn environments


Topics

Economic | Development


Both define robot consciousness in practical terms as situational awareness and the ability to recognize when human intervention is needed

Speakers

– Daniela Rus
– Shao Tianlan

Arguments

Consciousness defined as environmental awareness and appropriate response capability


Robots need ability to detect abnormal situations and call for human intervention


Topics

Infrastructure | Cybersecurity


Both express strong optimism about the current state and future prospects of robotics, contrasting dramatically with the field’s challenges a decade ago

Speakers

– Jake Loosararian
– Shao Tianlan

Arguments

Robotics industry transformation from bleak prospects 10 years ago to current excitement


Hardest technical challenges are behind us with rapid progress ahead


Topics

Economic | Development


Unexpected consensus

Acceptance of current limitations while maintaining optimism

Speakers

– Jake Loosararian
– Daniela Rus
– Shao Tianlan

Arguments

Need for teleoperation as prerequisite for full autonomy while robots learn environments


Gap between research lab demonstrations and truly scaled deployment solutions


Robots don’t need human-level dexterity to perform many useful tasks effectively


Explanation

Despite being leaders in robotics development, all speakers openly acknowledge significant current limitations while remaining highly optimistic. This honest assessment of challenges combined with confidence in progress is unexpected from industry leaders who might be expected to oversell capabilities


Topics

Development | Economic


Emphasis on gradual, practical deployment over revolutionary breakthroughs

Speakers

– Jake Loosararian
– Daniela Rus
– Shao Tianlan

Arguments

Focus on specific ‘droid over humanoid’ applications rather than generalized robots


Graduated autonomy with various robot forms performing specific tasks reliably in hundreds of days


Industrial applications in ports, storage systems, and manufacturing already delivering value


Explanation

All speakers advocate for incremental, practical approaches rather than pursuing dramatic humanoid breakthroughs, which is unexpected given the popular media focus on general-purpose humanoid robots


Topics

Economic | Infrastructure


Overall assessment

Summary

The speakers demonstrate remarkable consensus on key technical challenges (manipulation), the importance of physical AI integration, the viability of industrial applications, and the need for gradual deployment strategies. They share optimism about recent progress while honestly acknowledging current limitations.


Consensus level

High level of consensus with complementary rather than conflicting perspectives. This strong agreement among leading experts suggests a maturing field with clear technical priorities and realistic deployment strategies, which has positive implications for sustainable robotics development and adoption.


Differences

Different viewpoints

Timeline and feasibility of humanoid robot deployment

Speakers

– Daniela Rus
– Jake Loosararian

Arguments

Humanoid home assistants remain far away despite advances in other areas


Focus on specific ‘droid over humanoid’ applications rather than generalized robots


Summary

Rus emphasizes that humanoid home assistants are still far away and questions Elon Musk’s timeline claims, while Loosararian advocates for focusing on specific robotic applications rather than generalized humanoids, suggesting a more pragmatic approach to deployment.


Topics

Economic | Development


Requirements for robot dexterity and manipulation capabilities

Speakers

– Daniela Rus
– Shao Tianlan

Arguments

Manipulation remains the biggest challenge due to lack of skin-like sensors for dexterous tasks


Robots don’t need human-level dexterity to perform many useful tasks effectively


Summary

Rus emphasizes that manipulation is the biggest challenge requiring skin-like sensors for dexterous tasks, while Tianlan argues that robots can be highly useful without human-level dexterity, demonstrating this with his three-finger experiment.


Topics

Infrastructure | Development


Approach to robot deployment and business models

Speakers

– Jake Loosararian
– Shao Tianlan

Arguments

Business model shift from selling robots to deploying and operating them for customers


Acceleration in adoption with 10,000 intelligent robots delivered in one year versus eight years for first 10,000


Summary

Loosararian advocates for not selling robots but deploying and operating them for customers to maintain control and learning, while Tianlan’s approach involves selling/delivering robots directly to customers, as evidenced by his 10,000 unit deliveries.


Topics

Economic | Infrastructure


Unexpected differences

Role and necessity of teleoperation in robot development

Speakers

– Jake Loosararian
– Shao Tianlan

Arguments

Need for teleoperation as prerequisite for full autonomy while robots learn environments


Graduated autonomy with various robot forms performing specific tasks reliably in hundreds of days


Explanation

Unexpectedly, Loosararian emphasizes teleoperation as a necessary stepping stone to autonomy, while Tianlan seems to envision more direct autonomous deployment without emphasizing the teleoperation phase. This disagreement is significant because it affects privacy concerns, deployment costs, and the timeline for truly autonomous robots.


Topics

Infrastructure | Legal and regulatory


Optimism about overcoming technical challenges

Speakers

– Daniela Rus
– Shao Tianlan

Arguments

Gap between research lab demonstrations and truly scaled deployment solutions


Hardest technical challenges are behind us with rapid progress ahead


Explanation

Despite both being technical experts, Rus maintains a cautious view about the significant gaps between research and deployment, while Tianlan expresses strong confidence that the hardest challenges are already solved. This unexpected disagreement among technical experts suggests different perspectives on the maturity of current robotics technology.


Topics

Development | Infrastructure


Overall assessment

Summary

The main areas of disagreement center on deployment timelines and approaches, technical requirements for robot capabilities, and business models for bringing robots to market. While all speakers agree on the potential and direction of robotics advancement, they differ significantly on practical implementation strategies.


Disagreement level

Moderate disagreement with significant implications. The disagreements reflect different philosophies about robot development – from cautious, research-focused approaches to aggressive deployment strategies. These differences could lead to divergent industry standards, safety protocols, and market adoption patterns, ultimately affecting how robotics technology integrates into society.



Takeaways

Key takeaways

Physical AI is transitioning from concept to real-world deployment, with dramatic acceleration in adoption (10,000 robots deployed in one year vs. eight years for the first 10,000)


The robotics industry has transformed from bleak prospects 10 years ago to current excitement, with significant advances in materials, hardware, computation, data, and AI


Manipulation remains the biggest technical challenge due to lack of skin-like sensors, while locomotion and navigation have advanced significantly


Current successful applications focus on controlled industrial environments (ports, warehouses, manufacturing) rather than unstructured home environments


Teleoperation is a necessary prerequisite for full autonomy as robots learn environments, contrary to public perception of complete independence


Robots don’t need human-level dexterity or intelligence to perform many useful tasks – graduated autonomy with specific capabilities is the near-term reality


Business models are shifting from selling robots to deploying and operating robotic systems as services for customers


The hardest technical challenges are behind us, with rapid progress expected in the coming years


Resolutions and action items

Focus development on specific ‘droid over humanoid’ applications that provide clear ROI rather than generalized robots


Establish clear boundaries, rules, and safety standards for robot deployment similar to existing tools like cars


Prioritize forward deployment in actual customer environments to gather real-world data unavailable online


Develop physical AI models with embedded physics understanding that can adapt after deployment


Create uncertainty-aware robot brains that can recognize abnormal situations and call for human intervention


Unresolved issues

Timeline disagreements between panelists regarding when advanced humanoid robots will be widely deployed


Cost barriers remain significant with useful robots potentially costing hundreds of thousands of dollars


Privacy and security concerns with teleoperated robots in personal environments


The broader philosophical question of whether AI/robotics represents just another tool or requires extraordinary caution due to unprecedented power


How to bridge the gap between research lab demonstrations and truly scaled commercial deployment


Long-tail problem of handling unexpected situations in unstructured environments


Development of skin-like sensors for advanced manipulation capabilities


Suggested compromises

Deploy robots initially in controlled environments without direct human interaction before advancing to more complex scenarios


Use demonstration-based learning as the most intuitive way for humans to teach robots tasks


Build machines that adapt to humans rather than requiring humans to adapt to machines


Accept that robots with limited capabilities (like three-finger hands) can still perform many useful tasks effectively


Implement graduated autonomy where various robot forms perform specific reliable tasks rather than waiting for fully capable humanoids


Thought provoking comments

And the dirty little secret, of course, is like, you know... Sure, if you want to pay $40,000 for that and you want someone to be able to see what's going on in your house. Maybe you're getting out of the shower and, you know, there's someone teleoperating your robot

Speaker

Jake Loosararian


Reason

This comment cuts through the hype around autonomous robots by revealing the uncomfortable reality that many ‘autonomous’ robots still require human teleoperation. It introduces crucial considerations about privacy, cost, and the gap between marketing promises and technical reality.


Impact

This shifted the conversation from technical capabilities to practical deployment realities and ethical concerns. It forced the discussion to confront the disconnect between public expectations and current limitations, leading to a more honest assessment of where robotics actually stands today.


I just wanted to add that there are so many extraordinary applications and deployments that we already have in the world today… our company Venti Technologies is automating port operations… Symbotic automates storage systems, and they move millions of boxes every day

Speaker

Daniela Rus


Reason

This comment strategically redirected the conversation from futuristic speculation to concrete, real-world value creation. It demonstrated that robotics is already delivering significant economic impact in industrial settings, countering the narrative that robotics is still primarily experimental.


Impact

This grounded the discussion in current reality and shifted focus from consumer applications to industrial success stories. It established a foundation for discussing what works now versus what remains challenging, influencing subsequent questions about deployment in different environments.


I personally have tried to live with only three fingers and only do this action. I personally have tried that for several hours. Life is just fine... so this, I think, justifies my opinion that we don't need, say, human-level dexterity to enable robots to do many useful things

Speaker

Shao Tianlan


Reason

This is a remarkably creative and practical approach to understanding robot limitations. By literally constraining himself to robot-like capabilities, Tianlan provided empirical evidence that perfect human replication isn’t necessary for useful functionality.


Impact

This comment fundamentally reframed the discussion from ‘robots need to be like humans’ to ‘robots need to be useful within their constraints.’ It challenged the perfectionist mindset and opened up discussion about graduated autonomy and specialized applications rather than general-purpose humanoids.


So robots are made primarily from hard plastics and metal. And those materials are more like fingernails than the skin and flesh we have on our fingertips… just try manipulating your phone with your fingernails, not with your skin

Speaker

Daniela Rus


Reason

This analogy brilliantly made a complex technical limitation immediately understandable to any audience member. It transformed an abstract engineering challenge into a visceral, relatable experience that anyone could immediately test.


Impact

This comment created a shared understanding of manipulation challenges and likely influenced how the audience thought about robot capabilities. It provided a concrete framework for understanding why certain tasks remain difficult, making the technical discussion more accessible and memorable.


I call it droid over humanoid… Typically, robotics companies will make a single product and want to make a bunch of that single product, and then they end up creating a path towards… basically, they create a potential path towards just, like, you lose your unfair advantage

Speaker

Jake Loosararian


Reason

This introduced a crucial strategic insight about the robotics business model, suggesting that specialization and continuous learning from deployment environments creates more sustainable competitive advantages than trying to build general-purpose robots.


Impact

This comment shifted the conversation toward business strategy and deployment philosophy, influencing how the panel discussed the path from current capabilities to future applications. It introduced the concept that successful robotics companies need to think beyond individual products to integrated systems and data collection.


We don't need Einstein-level intelligence. Because I really like visiting zoos... we can find animals that don't have any idea about language doing perfect manipulation. Like a squirrel monkey, this big, probably with a 30-gram brain, can do perfect manipulation

Speaker

Shao Tianlan


Reason

This biological analogy provided a refreshing perspective on the intelligence requirements for robotics, suggesting that the bar for useful manipulation might be much lower than commonly assumed. It challenged the assumption that complex AI is always necessary.


Impact

This comment helped demystify robot intelligence requirements and supported the theme of graduated autonomy. It provided an optimistic counterpoint to discussions about technical limitations and suggested that solutions might be more achievable than expected.


Overall assessment

These key comments fundamentally shaped the discussion by consistently pulling it away from futuristic speculation toward practical realities. The conversation evolved from initial excitement about AI advances to a more nuanced understanding of current limitations, deployment challenges, and realistic timelines. Jake’s teleoperation revelation and Daniela’s fingernail analogy created moments of clarity that grounded the discussion in current technical realities. Tianlan’s three-finger experiment and zoo observations provided creative frameworks for thinking about robot capabilities differently. Together, these insights created a more honest, practical dialogue that balanced optimism with realism, ultimately making the complex field of robotics more accessible and understandable to the audience while highlighting both the genuine progress made and the significant challenges that remain.


Follow-up questions

What is the role of teleoperation in robotics and how should people understand its current necessity for robot learning?

Speaker

Jake Loosararian


Explanation

Jake raised concerns about the ‘dirty little secret’ that many humanoid robots still require human teleoperation, and people don’t realize this is a prerequisite for full autonomy as robots learn environments


What is the technical plan for manipulation and robot hands, especially given the timeline promises?

Speaker

Daniela Rus


Explanation

Daniela would ask Elon Musk about the specific technical roadmap for robot manipulation capabilities, noting the gap between ambitious timelines and current technical limitations


How can we develop better skin-like sensors for robot manipulation?

Speaker

Daniela Rus


Explanation

She identified the lack of skin-like sensing as a major technical challenge preventing robots from achieving human-level dexterity in manipulation tasks


How close are we to broader automation in infrastructure management in remote areas?

Speaker

Vanessa Mendez (Audience)


Explanation

An audience member working in drone automation for solar assets asked about the timeline for autonomous systems in remote infrastructure applications


What level of consciousness can robots achieve and how long would it take to reach human-level consciousness?

Speaker

Miguel (Audience)


Explanation

A physics PhD student asked about the potential for robotic consciousness and its timeline, which the panelists noted requires first defining what consciousness means


What makes human-human interaction natural and what are the biggest challenges for natural human-robot interaction?

Speaker

Giulia (Audience)


Explanation

A researcher in human-robot interaction asked about the fundamental aspects of natural interaction and the technical challenges in replicating this with robots


How can robots better understand and respond to uncertainty in their predictions?

Speaker

Daniela Rus


Explanation

She identified building uncertainty understanding within robot brains as an exciting area that would lead to more capable and safer robots


How can we develop robots that can detect what is normal versus abnormal in their environment?

Speaker

Shao Tianlan


Explanation

He emphasized the importance of robots having consciousness-like abilities to evaluate risk and detect abnormal situations for reliable and safe operation


How can we build machines that adapt to humans rather than humans adapting to machines?

Speaker

Daniela Rus


Explanation

She identified this as a key challenge requiring advances in perception, situation awareness, and activity recognition for truly collaborative human-robot interaction


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.