Tuesday, October 18, 2016

Artificial Intelligence: Google's DeepMind No Longer Needs Humans to Help It Learn

And so it begins.
From the DeepMind blog:

Differentiable neural computers
In a recent study in Nature, we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data, including artificially generated stories, family trees, and even a map of the London Underground. We also show that it can solve a block puzzle game using reinforcement learning.
Plato likened memory to a wax tablet on which an impression, imposed on it once, would remain fixed. He expressed in metaphor the modern notion of plasticity – that our minds can be shaped and reshaped by experience. But the wax of our memories does not just form impressions, it also forms connections, from one memory to the next. Philosophers like John Locke believed that memories connected if they were formed nearby in time and space. Instead of wax, the most potent metaphor expressing this is Marcel Proust’s madeleine cake; for Proust, one taste of the confection as an adult undammed a torrent of associations from his childhood. These episodic memories (event memories) are known to depend on the hippocampus in the human brain. 
Today, our metaphors for memory have been refined. We no longer think of memory as a wax tablet but as a reconstructive process, whereby experiences are reassembled from their constituent parts. And instead of a simple association between stimuli and behavioural responses, the relationship between memories and action is variable, conditioned on context and priorities. A simple article of memorised knowledge, for example a memory of the layout of the London Underground, can be used to answer the question, “How do you get from Piccadilly Circus to Moorgate?” as well as the question, “What is directly adjacent to Moorgate, going north on the Northern Line?”. It all depends on the question; the contents of memory and their use can be separated. Another view holds that memories can be organised in order to perform computation. More like lego than wax, memories can be recombined depending on the problem at hand. 
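[Ed. — a quick sketch of that point, ours and not DeepMind's: the same stored "memory", here a simplified and partly invented fragment of the Underground held as an adjacency structure, answers both kinds of question depending only on how it is queried.]

    # Toy fragment of the Underground: station -> [(neighbour, line, direction)].
    # The stations, lines and directions are illustrative, not an accurate map.
    from collections import deque

    EDGES = {
        "Piccadilly Circus": [("Leicester Square", "Piccadilly", "east")],
        "Leicester Square":  [("Piccadilly Circus", "Piccadilly", "west"),
                              ("Holborn", "Piccadilly", "east")],
        "Holborn":           [("Leicester Square", "Piccadilly", "west"),
                              ("Bank", "Central", "east")],
        "Bank":              [("Holborn", "Central", "west"),
                              ("Moorgate", "Northern", "north")],
        "Moorgate":          [("Bank", "Northern", "south"),
                              ("Old Street", "Northern", "north")],
        "Old Street":        [("Moorgate", "Northern", "south")],
    }

    def route(start, goal):
        """Breadth-first search: 'How do you get from start to goal?'"""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for neighbour, _, _ in EDGES[path[-1]]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(path + [neighbour])

    def adjacent(station, line, direction):
        """Lookup: 'What is directly adjacent, going <direction> on <line>?'"""
        return [n for n, l, d in EDGES[station] if l == line and d == direction]

    print(route("Piccadilly Circus", "Moorgate"))
    # ['Piccadilly Circus', 'Leicester Square', 'Holborn', 'Bank', 'Moorgate']
    print(adjacent("Moorgate", "Northern", "north"))   # ['Old Street']

One memory, two uses: a path search for the first question, a single-edge lookup for the second.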
Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just beginning to build neural networks that can think slowly – that is, deliberate or reason using knowledge. For example, how could a neural network store memories for facts like the connections in a transport network and then logically reason about its pieces of knowledge to answer questions? In a recent paper, we showed how neural networks and memory systems can be combined to make learning machines that can store knowledge quickly and reason about it flexibly. These models, which we call differentiable neural computers (DNCs), can learn from examples like neural networks, but they can also store complex data like computers.

In a normal computer, the processor can read and write information from and to random access memory (RAM). RAM gives the processor much more space to organise the intermediate results of computations. Temporary placeholders for information are called variables and are stored in memory. In a computer, it is a trivial operation to form a variable that holds a numerical value. And it is also simple to make data structures – variables in memory that contain links that can be followed to get to other variables. One of the simplest data structures is a list – a sequence of variables that can be read item by item. For example, one could store a list of players’ names on a sports team and then read each name one by one. A more complicated data structure is a tree. In a family tree for instance, links from children to parents can be followed to read out a line of ancestry. One of the most complex and general data structures is a graph, like the London Underground network.
When we designed DNCs, we wanted machines that could learn to form and navigate complex data structures on their own. At the heart of a DNC is a neural network called a controller, which is analogous to the processor in a computer. A controller is responsible for taking input in, reading from and writing to memory, and producing output that can be interpreted as an answer. The memory is a set of locations that can each store a vector of information....MUCH MORE
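[Ed. — as a rough illustration of just one piece of that mechanism, here is a content-based read over a memory of vector-valued locations: cosine-match a key the controller emits against every row, sharpen with a softmax, and return the blended row. This is our simplified sketch, with invented arrays and strength; the paper's full addressing scheme also has write heads, usage-based allocation, and temporal links.]

    import numpy as np

    def content_read(memory, key, strength):
        """Read by similarity: cosine-match the key against each memory row,
        softmax the scores, return the weighted average of the rows."""
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
        similarity = memory @ key / np.maximum(norms, 1e-8)  # cosine similarity
        scores = strength * similarity
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                             # softmax
        return weights @ memory                              # blended read vector

    memory = np.array([[1.0, 0.0, 0.0],   # N locations x W-dim vectors
                       [0.0, 1.0, 0.0],
                       [0.9, 0.1, 0.0]])
    print(content_read(memory, key=np.array([1.0, 0.0, 0.0]), strength=10.0))

Because every step is differentiable, the controller can learn by gradient descent what keys to emit and when.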
HT: The Next Web's "Google’s ‘DeepMind’ AI platform can now learn without human input"

I blame the human enablers for what's coming:

AI software should be able to register its own patents, law prof argues

Tieto appoints bot to leadership team
Scandinavian tech firm Tieto has appointed an artificial intelligence agent to the leadership team of a new data-driven business unit, giving the bot the opportunity to participate in team meetings and cast a vote on business direction....