Sounding like it leapt straight from the pages of a Marvel comic or a utopian science-fiction novel, a new AI project from Google's DeepMind can place itself in hypothetical space, imagining what a scene might look like from an entirely different angle, according to New Scientist. The system, called the Generative Query Network, has learned to place its perspective at a vantage point it has never occupied, rendering the scene from an imagined point of view. In other words, visual empathy.
Shown a two-dimensional photograph of a simple room with basic furniture placed just so, the network can shift perspective to an imaginary point in the scene and reproduce the three-dimensional space from the new position. Imagine standing in front of a phone booth, then picturing what it would look like to gaze back at yourself, and behind yourself, from inside the booth. Those with good imaginations and recollections could construct a viable scene, and this is precisely what DeepMind's network does, with accuracy that improves as training and tweaks continue.
The Generative Query Network, published today in @ScienceMagazine, learns without human supervision to (1) describe scene elements abstractly, and (2) 'imagine' unobserved parts of the scene by rendering from any camera angle. @arkitus @DeepSpiker
— DeepMind (@DeepMindAI) June 14, 2018
Shadows, size, and color can all be inferred and estimated by the Google-engineered algorithm, according to VentureBeat. To train the GQN, or Generative Query Network, researchers supplied the artificial intelligence with very simple scenes constructed in a defined play area. Simple spheres and columns of different colors were placed within the experimental area, and the GQN was given a 2D snapshot on which to base its initial assessment. Then, by exploring the space with a robot arm capable of manipulating the objects in the scene, the system could determine how accurate its neural estimate had been.
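The workflow described above can be sketched in miniature: observed 2D snapshots and their camera poses are aggregated into a single scene representation, which is then queried from a new, unobserved viewpoint. The sketch below is a toy illustration of that interface only; the dimensions, the random linear "networks" (`W_enc`, `W_dec`), and the function names are all assumptions for demonstration, not DeepMind's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not from the paper): 8x8 grayscale snapshots
# flattened to 64 values, 7-dim camera poses (position + orientation),
# and a 32-dim scene representation.
IMG, POSE, REP = 64, 7, 32

# Hypothetical stand-ins for the trained representation and
# generation networks: plain random linear maps.
W_enc = rng.normal(0, 0.1, (IMG + POSE, REP))
W_dec = rng.normal(0, 0.1, (REP + POSE, IMG))

def represent(observations):
    """Aggregate (image, pose) pairs into one scene representation.

    Summing per-observation encodings makes the representation
    order-independent and lets it accept any number of snapshots.
    """
    return sum(np.concatenate([img, pose]) @ W_enc
               for img, pose in observations)

def render(scene_rep, query_pose):
    """'Imagine' the scene from a camera pose never observed."""
    return np.concatenate([scene_rep, query_pose]) @ W_dec

# One 2D snapshot of a toy scene, plus the camera pose it came from...
observations = [(rng.normal(size=IMG), rng.normal(size=POSE))]
scene = represent(observations)

# ...queried from a new, imaginary vantage point.
predicted = render(scene, rng.normal(size=POSE))
print(predicted.shape)  # a flattened 8x8 image: (64,)
```

The key design point mirrored here is that the scene representation is built by summation, so additional snapshots can be folded in at any time and in any order before querying.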
The GQN is the backbone of DeepMind’s spatial reasoning and predictive abilities. Other modules for a more comprehensive AI may be in development alongside the spatial work, though reporting on that front remains sparse.
“Much like infants and animals, the GQN learns by trying to make sense of its observations of the world around it,” DeepMind researchers wrote in a blog post. “In doing so, the GQN learns about plausible scenes and their geometrical properties, without any human labeling of the contents of scenes.”
The nuances of human emotion, sympathy, and empathy still elude contemporary iterations of artificial intelligence, and there is no firm consensus on whether machines will ever grasp them.
Google acquired DeepMind Technologies Limited in 2014. Since then, the lab’s work has made many public appearances, most notably AlphaGo, which defeated several world champions at the popular game of Go without too much trouble, having been trained on human expert games through supervised learning and then refined through reinforcement learning.