MIT introduces rapid object creation using AI
Voice commands are converted into digital models and assembly instructions before a robot constructs the item, removing the need for specialised technical skills.
MIT researchers have created a speech-driven system that uses AI and robotics to build physical objects in minutes. Users provide a spoken request, and a robotic arm constructs items such as stools, shelves or decorative pieces from modular components.
The workflow turns spoken input into a digital mesh, divides the mesh into modular parts and adjusts the design for real-world fabrication. An automated sequence then directs the robot to assemble the object, enabling quick production without modelling or robotics expertise.
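Conceptually, the pipeline runs through a few discrete stages: interpret the request, break the resulting model into standard components, then plan the robot's assembly steps. The Python sketch below is a hypothetical illustration of that flow only; the function names, data structures and the fixed stool template are assumptions made for clarity, not the researchers' actual code.

```python
# Illustrative sketch of a speech-to-fabrication pipeline:
# request -> digital model -> modular parts -> assembly plan.
# All names and structures here are assumptions, not MIT's system.

from dataclasses import dataclass


@dataclass
class Part:
    component: str   # which modular piece to use, e.g. "rod" or "panel"
    position: tuple  # (x, y, z) placement in the build volume


def request_to_model(request: str) -> dict:
    """Stand-in for the speech/AI stage: map a transcribed request
    to a rough digital model (here, a fixed stool template)."""
    return {"shape": "stool", "legs": 4, "seat_height": 0.45}


def segment_into_parts(model: dict) -> list[Part]:
    """Divide the model into standard modular components."""
    corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
    legs = [Part("rod", (x, y, 0.0)) for x, y in corners[: model["legs"]]]
    seat = [Part("panel", (0.5, 0.5, model["seat_height"]))]
    return legs + seat


def plan_assembly(parts: list[Part]) -> list[str]:
    """Order the parts into instructions a robot arm could follow."""
    return [f"place {p.component} at {p.position}" for p in parts]


if __name__ == "__main__":
    model = request_to_model("build me a small stool")
    for step in plan_assembly(segment_into_parts(model)):
        print(step)  # in the real system, each step would drive the robot arm
```

In practice the first stage would combine speech recognition with a generative text-to-3D model, and the planner would account for stability, reachability and available stock, but the staged structure is the same.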
The modular approach reduces waste by allowing components to be disassembled and reused. The team also plans enhancements to improve structural strength and extend the system to larger-scale applications.
Researchers are also working on combining speech with gesture control to offer more intuitive interaction between humans, AI and robots.
