A team of researchers from DeepMind has developed an artificial intelligence (AI) system that brings AI agents closer to being able to understand 3D spaces. The system, called the Generative Query Network (GQN), empowers machines to learn to represent scenes using only their own sensors. It takes images of a scene captured from different angles, constructs an internal representation from this input, and uses that representation to predict the appearance of the scene from a new, arbitrary angle.

Existing AI systems can also perform scene representation, but they must be provided with large amounts of labelled data, making human input a requirement. The GQN needs no such labelled data: it uses its sensors to 'look' at static images of a scene and reconstruct a reasonably accurate 3D representation of it. As TechCrunch explains, this ability 'is critical for robots, especially because they have to navigate the real world by sensing it and reacting to what they see'. The developers of GQN acknowledged that more work is needed before the system can be deployed in the real world, but noted that the achievement 'represents one step closer to understanding how we may build agents that learn by themselves'.
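The pipeline described above — aggregate several (image, viewpoint) observations into one scene representation, then predict the scene's appearance from a new query viewpoint — can be sketched as a toy example. The linear encoder and decoder, the dimensions, and the random weights below are purely illustrative assumptions, not DeepMind's actual architecture:

```python
import numpy as np

# Toy sketch of the GQN-style pipeline: a representation network encodes
# (image, viewpoint) observations into a single scene representation, and
# a generation network predicts the scene's appearance from a new query
# viewpoint. The linear "networks" are illustrative stand-ins.

rng = np.random.default_rng(0)

IMG_DIM, VIEW_DIM, REP_DIM = 16, 4, 8  # assumed toy dimensions

# Randomly initialised stand-in weights (untrained, for illustration only).
W_enc = rng.normal(size=(IMG_DIM + VIEW_DIM, REP_DIM))
W_dec = rng.normal(size=(REP_DIM + VIEW_DIM, IMG_DIM))

def represent(observations):
    """Sum per-observation encodings into one scene representation."""
    r = np.zeros(REP_DIM)
    for image, viewpoint in observations:
        r += np.concatenate([image, viewpoint]) @ W_enc
    return r

def generate(r, query_viewpoint):
    """Predict the scene's appearance from an unseen query viewpoint."""
    return np.concatenate([r, query_viewpoint]) @ W_dec

# A scene 'observed' from three different viewpoints.
observations = [(rng.normal(size=IMG_DIM), rng.normal(size=VIEW_DIM))
                for _ in range(3)]
scene_rep = represent(observations)
prediction = generate(scene_rep, rng.normal(size=VIEW_DIM))
print(prediction.shape)  # a predicted image vector for the query viewpoint
```

Because the representation is a sum over observations, it does not depend on the order in which the viewpoints were seen, which loosely mirrors how a scene representation should depend on the scene rather than on the sequence of glances.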