Google unveils new Gemini Robotics models

Gemini Robotics 1.5 and ER 1.5 combine advanced reasoning with motor execution, making robots more capable of tackling complex missions.

Google has launched two new robotics models designed to improve robots’ reasoning, planning, and ability to complete multi-step tasks.

Google has unveiled two new robotics models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, designed to help robots better perceive, plan, and act in complex environments. The models aim to make robots more capable of completing multi-step tasks efficiently and transparently.

Gemini Robotics 1.5 converts visual information and instructions into actions, letting robots think before acting and explain their reasoning. Gemini Robotics-ER 1.5 acts as a high-level planner, reasoning about the physical world and using tools like Google Search to support decisions.

Together, the models form an ‘agentic’ framework. ER 1.5 orchestrates a robot’s activities, while Robotics 1.5 carries them out, enabling the machines to tackle semantically complex tasks. The pairing strengthens generalisation across diverse environments and longer missions.
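The division of labour can be pictured with a short, purely illustrative Python sketch: one role stands in for Gemini Robotics-ER 1.5 as the planner, the other for Gemini Robotics 1.5 as the executor. The function names and return values are hypothetical and only mirror the orchestration pattern described above, not Google's actual interfaces.

```python
# Illustrative only: hypothetical names showing the orchestrator/executor split
# the article describes, not Google's real APIs.
def plan_mission(instruction: str, scene: bytes) -> list[str]:
    """ER 1.5 role: reason about the scene (optionally consulting tools such as
    Google Search) and decompose the instruction into natural-language steps."""
    return [
        "locate the recycling and compost bins",
        "pick up the plastic bottle",
        "place the bottle in the recycling bin",
    ]

def execute_step(step: str, scene: bytes) -> str:
    """Robotics 1.5 role: turn one step plus visual input into motor actions,
    'thinking' before acting and explaining what it is doing."""
    return f"Executing step, with explanation: {step}"

def run_mission(instruction: str, scene: bytes) -> None:
    # The planner orchestrates; the executor carries out each step in turn.
    for step in plan_mission(instruction, scene):
        print(execute_step(step, scene))
```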

Google said Gemini Robotics-ER 1.5 is now available to developers through the Gemini API in Google AI Studio, while Gemini Robotics 1.5 is currently open to select partners. Both models advance robots’ reasoning, spatial awareness, and multi-tasking capabilities.
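For developers trying Gemini Robotics-ER 1.5 in Google AI Studio, a call through the Gemini API would look roughly like the Python sketch below, using the google-genai SDK. The model identifier and prompt shown here are assumptions, not taken from Google's documentation.

```python
# Minimal sketch of querying an ER-style model via the Gemini API with the
# google-genai SDK. The model name "gemini-robotics-er-1.5-preview" is an
# assumption; check Google AI Studio for the identifier actually exposed.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed identifier
    contents=(
        "Break 'clear the table and sort items into recycling and trash' "
        "into an ordered list of robot-executable steps."
    ),
)
print(response.text)
```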
