MIT creates AI tool to build virtual worlds for robots
The new method uses diffusion models to generate lifelike kitchens, restaurants and homes.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new AI system that can build realistic virtual environments for training robots. The tool, called steerable scene generation, creates kitchens, restaurants and living rooms filled with 3D objects, giving robots realistic settings in which to practise interacting with the physical world.
The system uses a diffusion model guided by Monte Carlo tree search to produce scenes that obey real-world physics. Unlike many hand-built simulations, it positions objects accurately, avoiding visual errors such as items overlapping or floating in mid-air.
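To give a flavour of the search component, the toy sketch below uses Monte Carlo tree search to steer sequential object placement towards dense, non-overlapping arrangements on a one-dimensional counter. This is an illustrative simplification only: the MIT system searches over scenes proposed by a diffusion model in 3D, and all names, dimensions and the reward function here are invented for the example.

```python
import math
import random

# Toy MCTS sketch (illustrative, not MIT's actual system): place objects
# of given widths on a 1-D "counter" so that none overlap, steering the
# search towards scenes that pack in as much as possible.

COUNTER = 10          # counter length in abstract units (assumption)
WIDTHS = [2, 3]       # object widths we may place (assumption)
MAX_OBJECTS = 4

def legal_actions(scene):
    """Return (start, width) placements that do not overlap existing objects."""
    if len(scene) >= MAX_OBJECTS:
        return []
    occupied = set()
    for s, w in scene:
        occupied.update(range(s, s + w))
    acts = []
    for w in WIDTHS:
        for s in range(COUNTER - w + 1):
            if not any(c in occupied for c in range(s, s + w)):
                acts.append((s, w))
    return acts

def reward(scene):
    """Steering objective: fraction of the counter occupied (denser is better)."""
    return sum(w for _, w in scene) / COUNTER

class Node:
    def __init__(self, scene, parent=None):
        self.scene = scene
        self.parent = parent
        self.children = {}            # action -> Node
        self.untried = legal_actions(scene)
        self.visits = 0
        self.value = 0.0

def select(node):
    # UCB1 descent until we reach a node with untried actions (or a leaf)
    while not node.untried and node.children:
        node = max(node.children.values(),
                   key=lambda c: c.value / c.visits
                   + math.sqrt(2 * math.log(node.visits) / c.visits))
    return node

def rollout(scene):
    # Random playout: keep placing objects until nothing fits
    scene = list(scene)
    while True:
        acts = legal_actions(scene)
        if not acts:
            return reward(scene)
        scene.append(random.choice(acts))

def mcts(iterations=300, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        node = select(root)
        if node.untried:                      # expansion
            action = node.untried.pop()
            child = Node(node.scene + [action], parent=node)
            node.children[action] = child
            node = child
        value = rollout(node.scene)           # simulation
        while node:                           # backpropagation
            node.visits += 1
            node.value += value
            node = node.parent
    # Read off the most-visited path as the final scene
    scene, node = [], root
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        scene.append(action)
    return scene

if __name__ == "__main__":
    best = mcts()
    print("placed objects (start, width):", best)
```

In the real system, the "actions" are scene edits proposed by the diffusion model and the objective can encode physical plausibility or task-specific goals, but the same select, expand, simulate and backpropagate loop applies.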
By generating millions of unique, lifelike environments, the system can dramatically increase the training data available for robotic foundation models. Robots trained in these AI-generated settings can practise everyday actions like stacking plates or placing cutlery with greater precision.
The researchers say the technique allows robots to learn more efficiently without the cost or limits of real-world testing. Future work aims to include movable objects and internet-sourced assets to make the simulations even more dynamic and diverse.