Sunday, July 22, 2018

To Make Sense of the Present, Brains May Predict the Future

From Quanta:

A controversial theory suggests that perception, motor control, memory and other brain functions all depend on comparisons between ongoing actual experiences and the brain’s modeled expectations.
https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/07/Fish_2880x1620_01.gif
Some neuroscientists favor a predictive coding explanation for how the brain works, in which perception may be thought of as a “controlled hallucination.” This theory emphasizes the brain’s expectations and predictions about reality rather than the direct sensory evidence that the brain receives.
Last month, the artificial intelligence company DeepMind introduced new software that can take a single image of a few objects in a virtual room and, without human guidance, infer what the three-dimensional scene looks like from entirely new vantage points. Given just a handful of such pictures, the system, dubbed the Generative Query Network, or GQN, can successfully model the layout of a simple, video game-style maze.

There are obvious technological applications for GQN, but it has also caught the eye of neuroscientists, who are particularly interested in the training algorithm it uses to learn how to perform its tasks. From the presented image, GQN generates predictions about what a scene should look like — where objects should be located, how shadows should fall against surfaces, which areas should be visible or hidden based on certain perspectives — and uses the differences between those predictions and its actual observations to improve the accuracy of the predictions it will make in the future. “It was the difference between reality and the prediction that enabled the updating of the model,” said Ali Eslami, one of the project’s leaders.

According to Danilo Rezende, Eslami’s co-author and DeepMind colleague, “the algorithm changes the parameters of its [predictive] model in such a way that next time, when it encounters the same situation, it will be less surprised.”
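
That quote describes a generic error-driven update. Below is a toy sketch of that loop (not DeepMind's GQN, whose actual neural architecture is far more elaborate; the sizes and names here are invented purely for illustration): a model predicts what it should observe, compares the prediction with what actually arrives, and nudges its parameters so the same situation surprises it less next time.

```python
# Toy prediction-error learning: predict, compare with reality, adjust so the
# same situation is "less surprising" next time. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

W_true = rng.normal(size=(4, 8))   # the world's hidden mapping (unknown to the model)
W_model = np.zeros((4, 8))         # the model's current parameters
lr = 0.05                          # learning rate

for step in range(500):
    context = rng.normal(size=8)   # a new query (e.g., a viewpoint)
    observed = W_true @ context    # what the world actually produces
    predicted = W_model @ context  # what the model expected to see
    error = observed - predicted   # the prediction error, i.e., the "surprise"
    W_model += lr * np.outer(error, context)  # shrink the error for next time

print("remaining mismatch:", np.linalg.norm(W_true - W_model))
```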

Neuroscientists have long suspected that a similar mechanism drives how the brain works. (Indeed, those speculations are part of what inspired the GQN team to pursue this approach.) According to this “predictive coding” theory, at each level of a cognitive process, the brain generates models, or beliefs, about what information it should be receiving from the level below it. These beliefs get translated into predictions about what should be experienced in a given situation, providing the best explanation of what’s out there so that the experience will make sense. The predictions then get sent down as feedback to lower-level sensory regions of the brain. The brain compares its predictions with the actual sensory input it receives, “explaining away” whatever differences, or prediction errors, it can by using its internal models to determine likely causes for the discrepancies. (For instance, we might have an internal model of a table as a flat surface supported by four legs, but we can still identify an object as a table even if something else blocks half of it from view.)

Gif of a 3D neural rendering being rotated

Given a two-dimensional image of a pattern of blocks (left), the Generative Query Network artificial intelligence can infer their three-dimensional arrangement in space (right). The system relies on some of the same fundamental insights that underlie the neuroscience theory known as predictive coding.
The prediction errors that can’t be explained away get passed up through connections to higher levels (as “feedforward” signals, rather than feedback), where they’re considered newsworthy, something for the system to pay attention to and deal with accordingly. “The game is now about adjusting the internal models, the brain dynamics, so as to suppress prediction error,” said Karl Friston of University College London, a renowned neuroscientist and one of the pioneers of the predictive coding hypothesis.
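
Putting the two directions of traffic together, a minimal sketch of that loop might look like the following (a linear, two-level caricature of the theory with made-up dimensions, not a model of any real brain circuit): the higher level sends a prediction down, the lower level sends back whatever it could not explain, and the higher level adjusts its belief until the residual error is suppressed.

```python
# Toy two-level predictive coding loop: top-down predictions, bottom-up
# prediction errors, and belief updates that "explain away" the input.
import numpy as np

rng = np.random.default_rng(1)

G = rng.normal(size=(16, 4))         # generative weights: belief -> predicted input
sensory_input = rng.normal(size=16)  # what actually arrives from below
belief = np.zeros(4)                 # the higher level's current hypothesis
lr = 0.01

for t in range(300):
    prediction = G @ belief              # feedback: what the input "should" look like
    error = sensory_input - prediction   # feedforward: the unexplained residual
    belief += lr * (G.T @ error)         # adjust the belief to suppress the error
    # (Much slower changes to G itself would play the role of long-term learning.)

print("unexplained error:", np.linalg.norm(sensory_input - G @ belief))
```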

Over the past decade, cognitive scientists, philosophers and psychologists have taken up predictive coding as a compelling idea, especially for describing how perception works, but also as a more ambitious, all-encompassing theory about what the entire brain is doing. Experimental tools have only recently made it possible to start directly testing specific mechanisms of the hypothesis, and some papers published in the past two years have provided striking evidence for the theory. Even so, it remains controversial, as is perhaps best evidenced by a recent debate over whether some landmark results were replicable.

Coffee, Cream and Dogs
“I take coffee with cream and ____.” It seems only natural to fill in the blank with “sugar.” That’s the instinct cognitive scientists Marta Kutas and Steven Hillyard of the University of California, San Diego, were banking on in 1980 when they performed a series of experiments in which they presented the sentence to people, one word at a time on a screen, and recorded their brain activity. Only, instead of ending with “sugar,” when the last word popped into place, the sentence read: “I take coffee with cream and dog.”

The researchers observed a greater brain response when the study’s subjects came across the unexpected word “dog,” characterized by a specific pattern of electrical activity, known as the “N400 effect,” that peaked approximately 400 milliseconds after the word was revealed. But how to interpret it remained unclear. Was the brain reacting because the word’s meaning was nonsensical in the context of the sentence? Or might it have been reacting because the word was simply unanticipated, violating whatever predictions the brain had made about what to expect?...
...MUCH MORE

So everyone is walking around having controlled hallucinations.
That would explain a lot.