Saturday, August 13, 2016

Google Is Making Its AI Read Romance Novels: A Brief History Of General Intelligence

From JSTOR Daily:

Will Reading Romance Novels Make Artificial Intelligence More Human?
[Image: Enigma machine]
This past spring, Google began feeding its natural language algorithm thousands of romance novels in an effort to humanize its “conversational tone.” The move did so much to fire the collective comic imagination that the ensuing hilarity muffled any serious commentary on its symbolic importance. The jokes, as they say, practically wrote themselves. But, after several decades devoted to task-specific “smart” technologies (GPS, search engine optimization, data mining), Google’s decision points to a recovered interest among the titans of technology in a fully anthropic “general” intelligence, the kind dramatized in recent films such as Her (2013) and Ex Machina (2015). Amusing though it may be, the appeal to romance novels suggests that Silicon Valley is daring to dream big once again.

The desire to automate solutions to human problems, from locomotion (the wheel) to mnemonics (the stylus), is as old as society itself. Aristotle, for example, sought to describe human cognition so precisely that it could be codified as a set of syllogisms, or building blocks of knowledge, bound together by algorithms to form the high-level insights and judgments that we ordinarily associate with intelligence.

Two millennia later, the German polymath Gottfried Wilhelm Leibniz dreamed of a machine called the calculus ratiocinator that would be programmed according to these syllogisms in the hope that, thereafter, all of the remaining problems in philosophy could be resolved with a turn of the crank.
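Leibniz's crank is easy to picture in modern terms. Here is a minimal sketch (in Python; the rule encoding and function name are illustrative, not anything from the article or from Leibniz) of a syllogism resolved mechanically: given "All men are mortal" and "Socrates is a man," repeated rule application derives the conclusion.

```python
# A toy version of Leibniz's "turn of the crank": syllogisms as data,
# inference as mechanical rule application. The encoding below is
# illustrative, not drawn from the article.

facts = {("man", "Socrates")}   # "Socrates is a man"
rules = [("man", "mortal")]     # "All men are mortal": man(x) -> mortal(x)

def turn_the_crank(facts, rules):
    """Forward-chain until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(turn_the_crank(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}  -- "Socrates is mortal"
```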

But there is more to intelligence than logic. Logic, after all, can only operate on already categorized signs and symbols. Even if we very generously grant that we are, as Descartes claimed in 1637, essentially thinking machines divided into body and mind, and even if we grant that the mind is a tabula rasa, as Locke argued a half-century later, the question remains: How do categories and content—the basic tools and materials of logic—come to mind in the first place? How, in other words, do humans comprehend and act upon the novel and the unknown? Such questions demand a fully contoured account of the brain—how it responds to its environment, how it makes connections, and how it encodes memories.
* * *
The American pioneers of artificial intelligence did not regard AI as an exclusively logical or mathematical problem. It was an interdisciplinary affair from the opening gun. The neuroscientist Karl Lashley, for example, contributed a paper at an early AI symposium that prompted one respondent to thank him for “plac[ing] rigorous limitations upon the free flight of our fancy in designing models of the nervous system, for no model of the nervous system can be true unless it incorporates the properties here described for the real nervous system.” The respondent, as it happens, was a zoologist; the fanciful models to which he was referring were “neural networks” or “neural nets,” an imaginative sally of two midcentury Americans, the neurophysiologist Warren McCulloch and the logician Walter Pitts. Neural networks were then popularized in 1949, when the Canadian psychologist Donald Hebb used them to construct a theory of how learning functions in the brain. But if neural nets were the original love child of neuroscience and artificial intelligence, they seemed, for quite some time, destined to be stillborn. Neuroscience quickly and thoroughly exposed the difficulty of creating a model of an organ that, with its 100 billion neurons, was far more complex than anything McCulloch and Pitts could have imagined. Discouraged, programmers began to wonder if they might get on without one.
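For concreteness, here is a minimal sketch of the two ideas just named, under heavy simplification: a McCulloch-Pitts threshold unit and a Hebbian weight update. The function names and parameter values below are illustrative, not drawn from the 1943 or 1949 papers.

```python
# A McCulloch-Pitts unit fires iff the weighted sum of its binary
# inputs reaches a threshold; Hebb's rule strengthens connections
# between co-active units. Values here are illustrative.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def hebbian_update(weights, inputs, output, rate=0.1):
    """'Cells that fire together wire together': nudge weights on co-active inputs."""
    return [w + rate * i * output for w, i in zip(weights, inputs)]

# An AND gate as a single McCulloch-Pitts unit:
w = [1, 1]
print(mcculloch_pitts([1, 1], w, threshold=2))  # 1
print(mcculloch_pitts([1, 0], w, threshold=2))  # 0

# One Hebbian step after a coincident firing strengthens both weights:
print(hebbian_update(w, [1, 1], 1))  # [1.1, 1.1]
```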

The effort to achieve a fully general intelligence while sidestepping the inconveniences of neurobiology found its first and most enduring expression in a 1950 paper by the British mathematician Alan Turing. Turing laid out the rules for a test that requires a human interrogator to converse, by text message, with two or more interlocutors—some human, some machine. According to the rules, the machine is deemed artificially intelligent only if it is indistinguishable from its human peers. The mainstreaming of this “imitation game” as a standard threshold test, whereby the explicit goal is really humanness rather than intelligence, reveals a romantic strand in the genealogy of midcentury artificial intelligence. In a tradition dating at least as far back as the Sanhedrin tractate of the Talmud (2nd century), in which Adam was conceived from mud as a golem, the ultimate triumph is to assume Godlike power by animating an object with the uniquely human capacity to feel, to know, and to love. Thus, the popular adoption of the Turing test betrays in the early pioneers of artificial intelligence the shadowy presence of some of the same fancies that moved Victor Frankenstein....MUCH MORE
Previously:
May 5
Google’s Artificial Intelligence Has Read Enough Romance Novels to Write One On its Own (GOOG)