BY STEPHANIE MLOT
“We showed how neural networks and memory systems can be combined to make learning machines that can store knowledge quickly and reason about it flexibly,” DeepMind researchers Alex Graves and Greg Wayne wrote in a blog post. “These models … can learn from examples like neural networks, but they can also store complex data like computers.”
The DNC (differentiable neural computer), for example, can store a roster of sports players and read each name back individually. It can also handle more complicated tasks, like deducing ancestral relationships from a family tree (video above).
The real test, however, is a graph, one of the most complex and general data structures, such as the London Underground network. Fed a layout of the Tube, the machine was able to recall the map from memory and then answer intricate questions: “Starting at Bond Street and taking the Central line in a direction for one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at what stop do you end up?”
The DNC can also plan routes when given a query like “How do you get from Moorgate to Piccadilly Circus?”
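For a concrete feel of what these two query types demand, the sketch below answers them on a tiny, invented slice of the Tube using ordinary hand-written graph code (a traversal function and a breadth-first search). The station list and line assignments are purely illustrative, and this explicit logic is exactly the kind of procedure the DNC is expected to pick up from examples rather than being programmed with.

```python
from collections import deque

# A tiny, made-up slice of the Tube as a labelled graph.
# (station, line) -> reachable neighbouring stations; illustrative only,
# not the real network, and not how the DNC itself stores the map.
edges = {
    ("Bond Street", "Central"): ["Oxford Circus", "Marble Arch"],
    ("Oxford Circus", "Central"): ["Tottenham Court Road", "Bond Street"],
    ("Oxford Circus", "Victoria"): ["Green Park", "Warren Street"],
    ("Green Park", "Jubilee"): ["Westminster", "Bond Street"],
    ("Bond Street", "Jubilee"): ["Green Park", "Baker Street"],
}

def traverse(start, steps):
    """Traversal query: follow a fixed list of (line, hops) instructions."""
    here = start
    for line, hops in steps:
        for _ in range(hops):
            # "In a direction" is simplified here to the first listed neighbour.
            here = edges[(here, line)][0]
    return here

def plan_route(start, goal):
    """Route-planning query: plain breadth-first search over the same graph."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for (station, _line), neighbours in edges.items():
            if station != path[-1]:
                continue
            for nxt in neighbours:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
    return None

# Traversal: from Bond Street, Central line for one stop, Victoria line for one stop.
print(traverse("Bond Street", [("Central", 1), ("Victoria", 1)]))   # Green Park
# Route planning: how do you get from Bond Street to Warren Street?
print(plan_route("Bond Street", "Warren Street"))
```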
“Conventional neural networks in our comparisons either could not store the information, or they could not learn to reason in a way that would generalize to new examples,” Graves and Wayne said.
The scientists found they could also train the DNC via reinforcement learning, letting the computer produce actions without ever showing it the answer, much like the children’s game “hot or cold” (video below). In testing, the DNC stored several subroutines in memory, one per possible goal, and executed whichever one was called for.
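The “hot or cold” idea can be boiled down to a toy loop like the one below: the learner never sees the hidden goal, only a warmer-or-colder reward after each move, and adapts using that signal alone. This is an illustration of the reward structure, not DeepMind’s actual training code.

```python
import random

# Hidden goal the learner is never shown; it starts in the middle and must
# find the goal using only warmer/colder feedback after each action.
target = random.randint(0, 100)
position, direction = 50, random.choice([-1, 1])
last_distance = abs(position - target)

for step in range(1, 301):
    position += direction                              # produce an action
    distance = abs(position - target)
    reward = 1 if distance < last_distance else -1     # "hot" or "cold"
    if reward < 0:
        direction = -direction                         # colder: try the other way
    last_distance = distance
    if distance == 0:
        print(f"reached the hidden goal after {step} moves")
        break
```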
“The question of how human memory works is ancient and our understanding still developing,” the researchers said. “We hope that DNCs provide both a new tool for computer science and a new metaphor for cognitive science and neuroscience: Here is a learning machine that, without prior programming, can organize information into connected facts and use those facts to solve problems.”