
Sunday, September 10, 2017

A Review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins

Mr. Kasparov was a pretty good chess player:
"From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 out of 228 months. His peak rating of 2851, achieved in 1999, was the highest recorded until being surpassed by Magnus Carlsen in 2013." -Wikipedia
But he should probably also be known for his Twitter feed, last seen in our post:

The Challenges And Triumphs Of Expanding A Family-Owned Winery
Tell me about it.*
*****
*Just kidding, despite making installment payments that could have bought France, I don't own a vineyard.

I wanted a chance to reprise one of the all-time greatest retweet comments, this one from former chess World Champion Garry Kasparov in response to The Onion:
Prompting a world-weary:
And from the Los Angeles Review of Books, June 29, 2017 the headline story:

A Brutal Intelligence: AI, Chess, and the Human Mind
CHESS IS THE GAME not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It’s hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling theory, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions — about chess, about computers, and about the mind.

At the dawn of the computer age, in 1950, the influential Bell Labs engineer Claude Shannon published a paper in Philosophical Magazine called “Programming a Computer for Playing Chess.” The creation of a “tolerably good” computerized chess player, he argued, was not only possible but would also have metaphysical consequences. It would force the human race “either to admit the possibility of a mechanized thinking or to further restrict [its] concept of ‘thinking.’” He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they influenced the course of the game. He laid out two very different approaches to programming the function. “Type A” would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. “Type B” would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.
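Shannon's two strategies can be sketched in a few lines of Python. The abstract game tree and evaluation below are hypothetical stand-ins, not real chess; the point is only the contrast between exhaustive and selective search:

```python
# A minimal sketch (not real chess) contrasting Shannon's two strategies
# on an abstract game tree. `moves` and `evaluate` are hypothetical
# stand-ins for a real move generator and evaluation function.

def moves(position):
    # Hypothetical move generator: each "position" is a number; its
    # successors are three derived numbers.
    return [position * 3 + d for d in (1, 2, 3)]

def evaluate(position):
    # Hypothetical static evaluation of a leaf position.
    return position % 7

def type_a(position, depth, maximizing=True):
    """Type A: brute force -- examine every successor to a fixed depth."""
    if depth == 0:
        return evaluate(position)
    scores = [type_a(c, depth - 1, not maximizing) for c in moves(position)]
    return max(scores) if maximizing else min(scores)

def type_b(position, depth, maximizing=True, beam=2):
    """Type B: selective search -- keep only the `beam` most promising
    successors (by static evaluation) and discard the rest, a crude
    stand-in for human-like intuition."""
    if depth == 0:
        return evaluate(position)
    children = sorted(moves(position), key=evaluate, reverse=maximizing)[:beam]
    scores = [type_b(c, depth - 1, not maximizing, beam) for c in children]
    return max(scores) if maximizing else min(scores)

print(type_a(1, depth=4))  # brute force: visits 3**4 = 81 leaves
print(type_b(1, depth=4))  # selective: visits only 2**4 = 16 leaves
```

Type B trades completeness for speed: it may miss the best line if the static evaluation misranks a move, which is exactly the gamble Shannon was describing.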

When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are “over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all.” In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. “Unfortunately,” Shannon concluded, “a machine operating according to the Type A strategy would be both slow and a weak player.”
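Kasparov's figure is easy to sanity-check with back-of-the-envelope arithmetic. The branching factor of roughly 30 legal moves per half-move is an assumed average, not an exact chess statistic:

```python
# Back-of-the-envelope check of the combinatorial explosion Kasparov
# describes. ~30 legal moves per half-move is a rough assumption.
branching = 30        # assumed average legal moves per half-move
plies = 8             # four moves by each side = eight half-moves
sequences = branching ** plies
print(f"{sequences:,}")  # 656,100,000,000 -- hundreds of billions
```

The rough estimate lands in the hundreds of billions, the same order of magnitude as the figure Kasparov cites.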
Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. Fascination with digital computers intensified during the 1950s, and the so-called “thinking machines” began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a digital computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are “machines that think” and, in the near future, “the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” With the right programming, a computer would turn sapient.

¤
It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.
One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn’t much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.

But the story of the computer’s victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach — the intelligence strategy — that ended up being the dead end. Despite their early optimism, AI researchers utterly failed in getting computers to think as people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. “It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind,” Kasparov reflects. “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”
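One of those "steady improvements in the efficiency of search algorithms" was alpha-beta pruning, which returns the exact same answer as exhaustive minimax while skipping branches that cannot change the outcome. A minimal sketch on an abstract tree (again, the move generator and evaluation are hypothetical stand-ins, not real chess):

```python
# Alpha-beta pruning on an abstract game tree: same answer as exhaustive
# minimax, fewer nodes visited. `moves` and `evaluate` are hypothetical.

def moves(position):
    return [position * 3 + d for d in (1, 2, 3)]

def evaluate(position):
    return position % 7

def minimax(position, depth, maximizing=True):
    if depth == 0:
        return evaluate(position)
    scores = [minimax(c, depth - 1, not maximizing) for c in moves(position)]
    return max(scores) if maximizing else min(scores)

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    if depth == 0:
        return evaluate(position)
    if maximizing:
        value = float("-inf")
        for child in moves(position):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: remaining siblings cannot matter
                break
        return value
    value = float("inf")
    for child in moves(position):
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:       # prune
            break
    return value

assert minimax(1, 5) == alphabeta(1, 5)  # identical result, cheaper search
```

Pruning does not make brute force "intelligent"; it just lets the same blind calculation reach deeper within a fixed time budget, which is all Deep Blue needed.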

The history of computer chess is the history of artificial intelligence....MUCH MORE
HT: Rough Type, whom we will be visiting again tomorrow.

Previously on Mr. Kasparov:
How human traders will beat the machines

And on Claude Shannon the Bell Labs polymath genius:
"Claude Shannon, the Las Vegas Shark"
"How Information Got Re-Invented"
The Bit Bomb: The True Nature of Information
"How did Ed Thorp Win in Blackjack and the Stock Market?"
How Big Data and Poker Playing Bots Are Taking the Luck Out of Gambling

Friday, February 9, 2024

Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins

A repost from 2017.

There was also a shout out to Shannon from the quants at Ruffer in July 17's Ruffer Review: "Navigating information" 

 

Saturday, July 24, 2021

A Review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins

A repost from 2017.


Friday, December 11, 2020

"How a chess grandmaster tried to outwit the computer"

From Prospect Magazine, November 13, 2020:

When artificial intelligence began beating the world’s greatest players, a chess grandmaster devised his own way to give human ingenuity an upper hand against the machine. The result, however, was not quite what he expected

On Sunday 23rd July 1972, the American grandmaster Bobby Fischer made the first move of the sixth game in the world chess championship—shunting his pawn two squares up the board.

Nothing, in itself, was unusual about that. Pushing either of the middle “d” or “e” pawns two squares forward is the most common way to begin a game. But this move involved neither of these pawns and took Fischer’s opponent—the reigning world champion Boris Spassky—by complete surprise. Moreover, because he had not expected it, he had not prepared for it.

Fischer began with square c2 to c4—the “English Opening” (so called because it was a favourite of a 19th-century English chess champion, Howard Staunton). To those who don’t follow chess, it might sound a comically small twist—the same move, just one or two spaces along. But it shook everything up, and shook Spassky up in particular. During the months he had been in training, the indolent Russian had pooh-poohed the notion that he had to be ready to respond to all of white’s opening options. Fischer almost unfailingly played e4. Surely he would not unleash a new opening in the most important match of his lifetime? It’s not easy to think of analogies, but imagine a fast bowler in cricket suddenly bamboozling the batsman with an over of leg-spin.

With both sides in unfamiliar territory, the game itself proved the most beautiful of the championship. After resigning, Spassky joined the spectators in applause at his opponent’s brilliance. Fischer was now ahead in the match; six weeks later he would be crowned the 11th world champion.

A quarter of a century on, Fischer called a shock press conference in Argentina. Since his headline-grabbing battle with Spassky, the American genius had become a recluse. In the past he’d been described as “troubled,” “turbulent,” “mercurial,” and had engaged in crude antisemitism despite being of Jewish descent; it was now clear that he’d tipped into paranoia. He’d resurfaced from isolation in 1992 to play a rematch against Spassky in war-torn former Yugoslavia, in defiance of US sanctions. After winning, Fischer disappeared yet again, this time as a wanted criminal.

The 1996 Buenos Aires press conference was packed. In his meandering remarks, Fischer denounced the arrest warrant against him and complained that he’d been denied payments from various books and films that supposedly exploited his name. But eventually, he got to his point: the promotion of a new type of chess, Fischer Random, which built-in far twistier twists than his celebrated opener in 1972.

This game would be like ordinary chess in most respects. Each side would have eight pawns, arrayed on the second (white) and seventh (black) ranks. Each side would have two rooks, two knights, two bishops, a king and a queen. The pieces would move as before, and the object of the game would still be to checkmate the other side. But there would be one radical departure: the pieces on the back ranks would be ordered—or maybe that should be disordered—randomly.
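The article describes the shuffle loosely. The codified form of Fischer's variant (Chess960) constrains it slightly: the two bishops must land on opposite-colored squares and the king must sit somewhere between the rooks. A sketch using rejection sampling, the simplest way to honor both rules:

```python
import random

def fischer_random_back_rank(seed=None):
    """Shuffle the back-rank pieces as in Fischer Random (Chess960):
    bishops on opposite-colored squares, king between the rooks.
    Rejection sampling: reshuffle until both constraints hold."""
    rng = random.Random(seed)
    pieces = list("RNBQKBNR")
    while True:
        rng.shuffle(pieces)
        bishops = [i for i, p in enumerate(pieces) if p == "B"]
        rooks = [i for i, p in enumerate(pieces) if p == "R"]
        king = pieces.index("K")
        if (bishops[0] % 2 != bishops[1] % 2      # opposite-colored squares
                and rooks[0] < king < rooks[1]):  # king between the rooks
            return "".join(pieces)

print(fischer_random_back_rank())  # e.g. "RKNBBQNR"
```

Both sides start from the same shuffled rank, so the randomness changes the opening landscape without unbalancing the game.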

For what reason? Well, four months earlier the IBM computer, Deep Blue, had taken on the world champion Garry Kasparov. Deep Blue had humiliated Kasparov in the first game, and although it lost the series, it was clear that the era of man’s superiority over the machine was approaching its end. In 1997, Kasparov would be crushed by a new and improved Deep Blue.

One might have expected Fischer to take some schadenfreude from Kasparov’s struggles against the chess supercomputer. Fischer was a child of the Cold War, and despite the collapse of the Soviet Union five years earlier, he retained an enduring conviction that the Russians were cheats, frauds and schemers. During the Argentine press conference, he defamed his two successors, Kasparov and Anatoly Karpov—their games against each other were fixed, he said. If supposed Russian rigging were the problem, then Fischer Random could have helped: when you have no idea what the set up of the pieces is in advance, collusion becomes impossible.

But the tilting of the scales against humans of all nations was even more of an affront to Fischer. Computers, he grumbled, had an unfair edge. No human could memorise the millions of opening variations that programmers could simply enter into Deep Blue’s database. Without that advantage, he insisted, human creativity could still vanquish any silicon wannabe. His aim, then, was to provide an answer of sorts to the creeping digital dominance of the game.

Twenty-four years on and Fischer Random, though still a minority pursuit, grows ever more popular: you can buy chess clocks that double up as gadgets that shuffle the starting order of the pieces around. For ordinary fans, the appeal is simple: the variant rescues the top-level game from what had increasingly become a struggle between human databases.

With the assistance of chess software engines, today’s top players can spend hours on openings each day, endlessly analysing innovations that have been made in games by others, becoming encyclopaedias of past play. “It’s a lot to keep up with,” says Britain’s leading player, Michael Adams. If that’s exhausting for them, it’s also deadening for those who watch—it can mean it takes 15 or 20 moves before any novel position appears. Indeed, some games now conclude before one or even both of the players are out of their rote-learned preparation. When the player on each side of the board is going through a drill, there is little drama, and the upshot, far too often, is a crowd-displeasing draw.

Look only at how many people are playing chess, and it seems as popular as ever—there have not been many winners from the Covid-19 pandemic, but with millions stuck at home the online game has boomed. On the website chess.com, there were 204m games between humans in February 2020, but 323m by June, growth of over 50 per cent in those few locked-down months. Still, there is a nagging sense that there is something missing in the spirit of the game, particularly at the top, which has sparked many different ideas to revive it. The AI company Deep Mind has been analysing various radical options, assessing the permutations and whether potential new laws could create a dynamic but balanced game. One mooted idea is that “castling,” the manoeuvre that allows you to shuffle around a king and a rook in a single move, should be abolished. Another—which opens up what to chess players would seem like almost psychedelic strategies—permits players to capture their own pieces.

But among many weird variations, Fischer Random remains the front runner, because—by subjecting the starting position to the luck of the draw—it directly attacks the curse of over-preparation in the database age. The alien piece arrangement can flummox players from the very first move. The long years in which a grandmaster has deepened his (and it is usually “his”) knowledge of the Ruy Lopez, the Sicilian Najdorf, the Nimzo-Indian or any other openings for white and black suddenly count for nothing. All the cognitive sweat from memorising innumerable opening lines yields no advantage. The thousands of hours top players put into opening training and development are redundant: what matters is raw talent....

....MUCH MORE

Monday, January 25, 2016

"How human traders will beat the machines"

Intertemporal arbitrage?*
From the Financial Times:
Advancement of computer-driven trading systems may sow seeds of their eventual obsolescence
When Garry Kasparov sat down to play the IBM Deep Blue computer the Russian chess grandmaster believed he had discovered a strategy to turn the machine’s greatest strength into a weakness.
.
Deep Blue relied on being able to compute a vast database containing hundreds of thousands of chess games played by past grandmasters, meaning Kasparov was not simply playing one supercomputer but in fact taking on the amassed knowledge of many of the strongest players from history all at once.
.
But to be able to make use of large parts of its database, Deep Blue required its opponent to play like a typical grandmaster. If Kasparov was to intentionally make a bizarre opening play rarely seen in high level games the computer would have vast parts of its database rendered useless, as it would have fewer games to reference, and the human could regain the upper hand.
.
Kasparov’s “man verses machine” struggle with Deep Blue and his eventual defeat is a frequent refrain in discussions about the future of financial trading, and the likelihood that ever advancing technology will render the human fund manager obsolete.
.
Many human investors are enduring a miserable start to 2016 as stock markets and commodity prices have slumped. At the same time some trend following computer-driven hedge funds have made double-digit returns during the sell-off. Is man once again on the verge of being trumped by the machine like the Russian grandmaster? The answer is no. Here are three reasons why the best human investors are likely to be able to beat the “bots” for a long time to come.
.
First, human investors can follow Kasparov’s strategy and seek to use the strengths of algorithmic trading programmes against them. To paraphrase Oscar Wilde, trend-following hedge fund computers know the price of everything, but the value of nothing. This means human investors who focus on value over the long term, rather than price trends, should always be able to profit.
Hedge fund computers analyse millions of data points from past market movements to predict how markets will behave in the future. Broadly speaking, when any large market moves in one direction for a period of time, the trend-following computer will be able to profit from it, as many have done this month.
.
These computers, however, are not investors in the true sense of the word, but semi-automated trading bots seeking signal within market noise. Their models analyse price data, rather than creatively assessing, for example, how accurately a share that represents fractional ownership of a real business reflects the present and future value of that business....MORE
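For readers who like to see the machinery, the trend-following logic the excerpt describes reduces to something like a moving-average rule. A deliberately tiny sketch (illustrative only; no real fund's model is this simple):

```python
def sma(prices, window):
    """Simple moving average of the trailing `window` prices."""
    return sum(prices[-window:]) / window

def trend_signal(prices, fast=3, slow=6):
    """Toy trend-follower: long when the fast average sits above the
    slow average, short when below, flat otherwise."""
    if len(prices) < slow:
        return 0  # not enough history yet
    fast_avg, slow_avg = sma(prices, fast), sma(prices, slow)
    if fast_avg > slow_avg:
        return 1    # ride the uptrend
    if fast_avg < slow_avg:
        return -1   # ride the downtrend
    return 0

# A steady slide like early 2016 puts the model short -- hence the
# double-digit gains while many discretionary managers suffered.
falling = [100, 98, 96, 93, 91, 88, 85]
assert trend_signal(falling) == -1
rising = [85, 88, 91, 93, 96, 98, 100]
assert trend_signal(rising) == 1
```

Note the Wilde point: nothing in the signal references what the asset is worth, only where its price has been.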
*I know I think about time-shifting more than the average person.
Okay, truth be told, despite a predilection for really fast 'puters I obsess about outsmarting the machines.

I mean what normal person types this headline "'Facebook, Google, and the Economics of Time' (FB; GOOG)" and follows it with this opening sentence: "Although this story is not about intertemporal arbitrage I'm sure that's the first thing some of our readers thought of."

Or gets giddy with "Intertemporal Arbitrage: 'Winning Big by Playing Long-Term Trends'" (CNI; PNR)?
So be it.

See also 2013's "UPDATED--The Economist On How the Commodity Quants Lost It":
...The bolded bit points up one of the failures of the fund managers. They get paid to figure out the intertemporal arbitrage, a fancy way of saying the task at hand is to understand the time period that gives the fund the greatest advantage versus the market...
The classic example is the individual investor realizing that he can't compete with HFT and looking at longer than nanosecond time periods. This opens up the possibility of not just not-competing with the traders with the lowest latency but of taking advantage of mispricings caused by their behavior. This is exemplified by one of Buffett's baseball metaphors (he has quite a few): 
 "In investments, there's no such thing as a called strike. You can stand there at the plate and the pitcher can throw a ball right down the middle; and if it's General Motors at 47 and you don't know enough to decide on General Motors at 47, you let it go right on by and no one's going to call a strike. The only way you can have a strike is to swing and miss."
The point is, you don't have to be at the market every second. You are afforded the luxury of just waiting for the perfect pitch. 

Now, for a fund manager it gets tricky writing the quarterly report and saying "We didn't do much in Q3, we're waiting for Mr. Market to give us the high-hanging curve ball," but if you've been honest with the investors that the tactic you've pulled from the toolbox is akin to the military's hurry-up-and-wait sense of time, it is doable.

As a side note anyone who considers a move that is measured in weeks to be a trend is nuts. A trend is John Templeton going into the Japanese markets at 2 times earnings and catching a 40-fold move 1965-1989....

Sunday, June 16, 2013

"Slaves to the algorithm"

I had followed a link from Longform, "Are Coders Worth It?" about a writer choosing to be a coder (an activity we've been pitching* our younger readers to take up) and ended up at Aeon.
I had never heard of Aeon but they seem to have taken on a rather large remit (or is it ambit?).
From Aeon:

Computers could take some tough choices out of our hands, if we let them. Is there still a place for human judgement?
 
'When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess.' Photo by Jeffrey Sylvester
 In central London this spring, eight of the world’s greatest minds performed on a dimly lit stage in a wood-panelled theatre. An audience of hundreds watched in hushed reverence. This was the closing stretch of the 14-round Candidates’ Tournament, to decide who would take on the current chess world champion, Viswanathan Anand, later this year.

Each round took a day: one game could last seven or eight hours. Sometimes both players would be hunched over their board together, elbows on table, splayed fingers propping up heads as though to support their craniums against tremendous internal pressure. At times, one player would lean forward while his rival slumped back in an executive leather chair like a bored office worker, staring into space. Then the opponent would make his move, stop his clock, and stand up, wandering around to cast an expert glance over the positions in the other games before stalking upstage to pour himself more coffee. On a raised dais, inscrutable, sat the white-haired arbiter, the tournament’s presiding official. Behind him was a giant screen showing the four current chess positions. So proceeded the fantastically complex slow-motion violence of the games, and the silently intense emotional theatre of their players.

When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback?
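The "mathematically finite" claim can be made concrete with a toy minimax search, the in-principle procedure for solving any finite game; chess's astronomical game tree is what keeps the exhaustive version forever out of reach:

```python
def minimax(node, maximizing):
    """Exhaustive minimax: fed the right rules, a finite game is solvable
    in principle. Leaves are numeric scores; internal nodes are lists of
    child nodes, with the players alternating turns."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A toy two-ply game: the maximizer picks the branch whose worst-case
# (minimizer's) reply is best.
tree = [[3, 12], [2, 8], [1, 9]]
assert minimax(tree, True) == 3
assert minimax(tree, False) == 8
```

Real engines truncate this search with evaluation functions and pruning; the perfection is only ever "in principle."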
Such a dismissive attitude would be in tune with the spirit of the times. Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.
If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment
But in chess, at least, the algorithm has not displaced human judgment. The imperfectly human players who contested the last round of the Candidates’ Tournament — in a thrilling finish that, thanks to unusual tiebreak rules, confirmed the 22-year-old Norwegian Magnus Carlsen as the winner, ahead of former world champion Vladimir Kramnik — were watched by an online audience of 100,000 people. In fact, the host of the streamed coverage, the chatty and personable international master Lawrence Trent, pointedly refused to use a computer engine (which he called ‘the beast’) for his own analyses and predictions. The idea, he explained, is to try to figure things out for yourself. During a break in the commentary room on the day I was there, Trent was eating crisps and still eagerly discussing variations with his plummily amusing co-presenter, Nigel Short (who himself had contested the World Championship against Kasparov in 1993). ‘He’ll find Qf4; it’s not difficult to find,’ Short assured Trent. ‘Ng8, then it’s…’ ‘It’s game over.’ ‘Game over!’

Chess is an Olympian battle of wits. As with any sport, the interest lies in watching profoundly talented humans operating at the limits of their capability. There does exist a cyborg version of the game, dubbed ‘advanced chess’, in which humans are allowed to use computers while playing. But it is profoundly boring to watch, like a contest over who can use spreadsheet software more effectively, and hasn’t caught on. The ‘beast’ can be a useful helpmeet — Veselin Topalov, a previous challenger for Anand’s world title, used a 10,000-CPU monster in his preparation for that match, which he still lost — but it’s never going to be the main event.

This is a lesson that the algorithm-boosters in the wider culture have yet to learn. And outside the Platonically pure cosmos of chess, when we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed.
 
At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerised cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves. What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013).

More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?...MORE
*e.g. See:
So, You Coded A Video Game, Sold Twenty Million Copies, Made A Hundred Million Bucks (last year) and You're Swedish
The Number One Skill For The Next Generation Of Billionaires
In Demand: "Hounded By Recruiters, Coders Put Themselves Up For Auction"
"How do you get a quant job on Wall Street?..."
"Can You Still Become a Quant in Your 30's?"

Wednesday, March 16, 2016

Will the Go Algorithm Use the Million Dollar Prize to Enter the World Series of Poker?

Following up on this morning's "So, What Will Google’s Winning Go Algorithm Do Now That It's Won the Million Bucks?".

From the Los Angeles Times:

A computer is now the Master of Go -- but let's see it win at poker
The worlds of Go and artificial intelligence were both unsettled by the victory this week of an advanced artificial intelligence computer over one of the world's leading masters of the intricate East Asian board game, four games to one. It's an achievement that experts in both fields didn't expect to happen for as long as 10 years.

The triumph of AlphaGo, the product of a Google lab named DeepMind, over the fourth-ranked Go champion, Lee Sedol of South Korea, is widely viewed as a landmark in artificial intelligence much greater than the victory of IBM's Deep Blue over chess Grandmaster Garry Kasparov in 1997. That result, Kasparov wrote in 2010, "was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer." Go is a far more complex challenge than chess, so it's unsurprising that AlphaGo's victory is seen as bringing the era of ultimate submission that much closer.

But is it? Some AI experts are cautious. “People’s minds race forward and say, if it can beat a world champion, it can do anything,” Oren Etzioni, head of the nonprofit Allen Institute for Artificial Intelligence in Seattle, told the journal Nature this week. The computational technique employed by AlphaGo isn't broadly applicable, he said. “We are a long, long way from general artificial intelligence.”

And there's another aspect to consider. Chess and Go are both "deterministic perfect information" games, Alan Levinovitz of Wired observed in 2014 -- games in which "no information is hidden from either player, and there are no built-in elements of chance, such as dice." How will a computer do in a game in which key information is hidden and the best players win by using the unique human skill of lying? In games such as poker, Science magazine points out, in which victory depends not on pursuing the optimal strategy, but on deviating from it?

First, a few details of the game at the center of this week's match. Invented more than two millennia ago in China, Go is played on a grid of 19-by-19 lines on which two players place black or white stones, attempting to build chains that surround territory without being enveloped by their opponent. Because of the size of the board and the lack of specific rules for each piece, the number of potential moves in Go is exponentially larger than in chess.

The number of possible arrangements of stones is on the order of 10 to the 100th power; far more than the options that Deep Blue had to consider in defeating Kasparov. The route to victory at any point can be obscure; experts talk as though their play emerges from intuition, or even the subconscious, as much as experience. Indeed, during Sedol's lone victorious game, DeepMind chief Demis Hassabis tweeted that AlphaGo hadn't played a wrong move, but had become deluded into believing it was winning. Go occupies a special position in oriental life; in his classic work, "The Master of Go," the Nobel Prize-winning novelist Yasunari Kawabata spun a trenchant tale of youth versus age, the past versus the present, and the power of culture out of the game.
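The scale gap is easy to check with back-of-envelope arithmetic, using commonly cited (and admittedly rough) branching factors and game lengths:

```python
import math

# Rough game-tree estimates: (typical branching factor, typical game length
# in plies). Both figures are commonly cited approximations, not exact.
chess = (35, 80)
go = (250, 150)

def log10_tree_size(branching, plies):
    """Order of magnitude of branching ** plies, without overflowing."""
    return plies * math.log10(branching)

# Chess: roughly 10^123 move sequences; Go: roughly 10^359 -- hundreds of
# orders of magnitude beyond what brute-force search alone can handle,
# which is why AlphaGo needed learned shortcuts rather than raw lookahead.
assert int(log10_tree_size(*chess)) == 123
assert int(log10_tree_size(*go)) == 359
```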

The DeepMind designers' solution to the intricacy of Go was to use an architecture known as neural networks. These mimic the structure of the human brain by creating connections that become stronger with experience -- in other words, to learn. AlphaGo could test millions of options and assess their outcomes rapidly, but its design allowed it to develop shortcuts to discard all but the most promising choices. Observers found the result strikingly human-like; the machine's move  No. 37 in game two was so unexpected that some commentators thought it was a mistake. Sedol left the room in shock, and later confessed, "Today I am speechless." See the move and the commentators' reactions below, at 1:18:25 of the recording:...MORE
The artificial intelligence folks still at Carnegie Mellon after Uber bought the autonomous vehicle department have figured their road to riches is not Travis Kalanick but online Limit Hold'em.
(kidding, research purposes only. they say)

Saturday, May 28, 2016

What To Read At Your Hamptons Vacation Rental

Following up on "Questions America Wants Answered: Is It Possible To Snag A Last-Minute Vacation Rental In the Hamptons?".
From Capital Spectator:

 
Book Bits | 28 May 2016

What They Do With Your Money:
How the Financial System Fails Us and How to Fix It

By Stephen Davis, et al.
Summary via publisher (Yale University Press)
Each year we pay billions in fees to those who run our financial system. The money comes from our bank accounts, our pensions, our borrowing, and often we aren’t told that the money has been taken. These billions may be justified if the finance industry does a good job, but as this book shows, it too often fails us. Financial institutions regularly place their business interests first, charging for advice that does nothing to improve performance, employing short-term buying strategies that are corrosive to building long-term value, and sometimes even concealing both their practices and their investment strategies from investors.
Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
By Thomas H. Davenport and Julia Kirby
Review via FT
An IBM executive told a recent conference that when supercomputer Deep Blue was halfway through its 1997 chess match with Garry Kasparov, it made a random move, due to a software bug. Assuming the machine was smarter than it was, Kasparov later made a strategic error that helped hand Deep Blue victory in the match.
Thomas Davenport and Julia Kirby warn that humans could, like Kasparov, cede the future to machines too easily. “Many knowledge workers are fearful,” they write. “We should be concerned, given the potential for these unprecedented tools to make us redundant. But we should not feel helpless in the midst of the large-scale change unfolding around us.”
Who Needs the Fed?: What Taylor Swift, Uber, and Robots Tell Us About Money, Credit, and Why We Should Abolish America’s Central Bank
By John Tamny
Summary via publisher (Encounter Books)
The Federal Reserve is one of the most disliked entities in the United States at present, right alongside the IRS. Americans despise the Fed, but they’re also generally a bit confused as to why they distrust our central bank. Their animus is reasonable, though, because the Fed’s most famous function—targeting the Fed funds rate—is totally backwards. John Tamny explains this backwardness in terms of a Taylor Swift concert followed by a ride home with Uber.
Age of Discovery: Navigating the Risks and Rewards of Our New Renaissance
By Ian Goldin and Chris Kutarna
Summary via publisher (Bloomsbury)
Age of Discovery explores a world on the brink of a new Renaissance and asks: How do we share more widely the benefits of unprecedented progress? How do we endure the inevitable tumult generated by accelerating change? How do we each thrive through this tangled, uncertain time? In Age of Discovery, Ian Goldin and Chris Kutarna show how we can draw courage, wisdom and inspiration from the days of Michelangelo and Leonardo da Vinci in order to fashion our own Golden Age.
...MORE

Tuesday, October 17, 2023

It's the "T" In ChatGPT

From The New Atlantis, Summer 2023 edition:

Why This AI Moment May Be the Real Deal
This time, believe the hype. 

For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley. 

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal.

Men Behind Curtains

Artificial intelligence pioneer Joseph Weizenbaum originated the man-behind-the-curtain critique in his 1976 book Computer Power and Human Reason. Weizenbaum was the inventor of ELIZA, the world’s first chatbot. Imitating a psychotherapist who was just running through the motions to hit the one-hour mark, it worked by parroting people’s queries back at them: “I am sorry to hear you are depressed.” “Tell me more about your family.” But Weizenbaum was alarmed to find that users would ask to have privacy with the chatbot, and then spill their deepest secrets to it. They did this even when he told them that ELIZA did not understand them, that it was just a few hundred lines of dirt-stupid computer code. He spent the rest of his life warning of how susceptible the public was to believing that the lights were on and someone was home, even when no one was....
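Weizenbaum's parroting trick really was just a handful of rules. A hypothetical few-line homage (the real ELIZA used a much richer script of decomposition and reassembly rules):

```python
import re

# A miniature ELIZA-style responder: match a keyword pattern, then
# reflect the user's own words back as a therapist's prompt.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "I am sorry to hear you are {}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).lower())
    return "Please go on."  # the all-purpose fallback

assert respond("I am depressed") == "I am sorry to hear you are depressed."
assert respond("It's about my family") == "Tell me more about your family."
assert respond("Hmm") == "Please go on."
```

That the code contains no understanding whatsoever is exactly Weizenbaum's point; the illusion lives entirely in the reader.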

....MUCH MORE

Sunday, February 4, 2024

"Why This AI Moment May Be the Real Deal"

The author of this piece digs science.

From The New Atlantis, Summer 2023 edition: 

This time, believe the hype.

For many years, those in the know in the tech world have known that “artificial intelligence” is a scam. It’s been true for so long in Silicon Valley that it was true before there even was a Silicon Valley. 

That’s not to say that AI hadn’t done impressive things, solved real problems, generated real wealth and worthy endowed professorships. But peek under the hood of Tesla’s “Autopilot” mode and you would find odd glitches, frustrated promise, and, well, still quite a lot of people hidden away in backrooms manually plugging gaps in the system, often in real time. Study Deep Blue’s 1997 defeat of world chess champion Garry Kasparov, and your excitement about how quickly this technology would take over other cognitive work would wane as you learned just how much brute human force went into fine-tuning the software specifically to beat Kasparov. Read press release after press release of Facebook, Twitter, and YouTube promising to use more machine learning to fight hate speech and save democracy — and then find out that the new thing was mostly a handmaid to armies of human grunts, and for many years relied on a technological paradigm that was decades old.

Call it AI’s man-behind-the-curtain effect: What appear at first to be dazzling new achievements in artificial intelligence routinely lose their luster and seem limited, one-off, jerry-rigged, with nothing all that impressive happening behind the scenes aside from sweat and tears, certainly nothing that deserves the name “intelligence” even by loose analogy.

So what’s different now? What follows in this essay is an attempt to contrast some of the most notable features of the new transformer paradigm (the T in ChatGPT) with what came before. It is an attempt to articulate why the new AIs that have garnered so much attention over the past year seem to defy some of the major lines of skepticism that have rightly applied to past eras — why this AI moment might, just might, be the real deal.

Men Behind Curtains
Artificial intelligence pioneer Joseph Weizenbaum originated the man-behind-the-curtain critique in his 1976 book Computer Power and Human Reason. Weizenbaum was the inventor of ELIZA, the world’s first chatbot. Imitating a psychotherapist who was just running through the motions to hit the one-hour mark, it worked by parroting people’s queries back at them: “I am sorry to hear you are depressed.” “Tell me more about your family.” But Weizenbaum was alarmed to find that users would ask to have privacy with the chatbot, and then spill their deepest secrets to it. They did this even when he told them that ELIZA did not understand them, that it was just a few hundred lines of dirt-stupid computer code. He spent the rest of his life warning of how susceptible the public was to believing that the lights were on and someone was home, even when no one was.

I experienced this effect firsthand as a computer science student at the University of Texas at Austin in the 2000s, even though the field by this time was nominally much more advanced. Everything in our studies seemed to point us toward the semester where we would qualify for the Artificial Intelligence course. Sure, you knew that nothing like HAL 9000 existed yet. But the building blocks of intelligence, you understood, had been cracked — it was right there in the course title.

When Alan Turing and Claude Shannon and John von Neumann were shaping the building blocks of computing in the 1940s, the words “computer science” would have seemed aspirational too — just like “artificial intelligence,” nothing then was really worthy of that name. But in due time these blocks were arranged into a marvelous edifice. So there was a titter surrounding the course: Someone someday would do the same for AI, and maybe, just maybe, it would be you.

The reality was different. The state of the art at the time was neural nets, and it had been for twenty or thirty years. Neural nets were good at solving some basic pattern-matching problems. For an app I was building to let students plan out their course schedules, I used neural nets to match a list of textbook titles and author names to their corresponding entries on Amazon. This allowed my site to make a few bucks through referral fees, an outcome that would have been impossible for a college-student side hustle if not for AI research. So it worked — mostly, narrowly, sort of — but it was brittle: Adjust the neural net to resolve one set of false matches and you would create three more. It could be tuned, but it had no responsiveness, no real grasp. That’s it?, you had to think. There was no way, however many “neurons” you added to the net, however much computing power you gave it, that you could imagine arranging these building blocks into any grand edifice. And so the more impressed people sounded when you mentioned using this technology, the more cynicism you had to adopt about the entire enterprise.
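To give a flavor of the brittleness described, here is a single-neuron caricature of that kind of matcher, with hand-set weights standing in for the whack-a-mole tuning (hypothetical fields and thresholds, not the author's actual code):

```python
def features(record_a, record_b):
    """Crude features for a book-matching perceptron: fraction of shared
    title words, and whether the author names agree."""
    words_a = set(record_a["title"].lower().split())
    words_b = set(record_b["title"].lower().split())
    overlap = len(words_a & words_b) / max(len(words_a | words_b), 1)
    same_author = 1.0 if record_a["author"] == record_b["author"] else 0.0
    return (overlap, same_author)

# Hand-set weights and threshold -- the tuning whose whack-a-mole quality
# the text describes: adjust to fix one false match, create three more.
WEIGHTS, THRESHOLD = (0.7, 0.5), 0.6

def is_match(a, b):
    score = sum(w * f for w, f in zip(WEIGHTS, features(a, b)))
    return score >= THRESHOLD

syllabus = {"title": "Introduction to Algorithms", "author": "Cormen"}
listing = {"title": "Introduction to Algorithms Third Edition",
           "author": "Cormen"}
other = {"title": "Introduction to Cooking", "author": "Child"}
assert is_match(syllabus, listing)
assert not is_match(syllabus, other)
```

It works, mostly, narrowly, sort of; and there is visibly no path from stacking more of these to anything like a grand edifice.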

All of this is to say that skepticism about the new AI moment we are in rests on very solid ground.....

....MUCH MORE

Of that trinity of raw intellectual achievement, the brightest was probably Shannon. I say that while acknowledging Turing's brilliance and having posted stuff like "The Word Genius Is Often Overused But....John von Neumann and Pretty Much Everything".  

Shannon made major contributions in at least three—and maybe as many as five—different fields of intellectual endeavor. If interested see:

The Bit Bomb: The True Nature of Information

The subject of this article, Claude Shannon, has a couple of interesting connections to finance/investing/trading beyond 'just' creating information theory (along with MIT's Norbert Wiener, who was coming in on a different angle of attack); more after the jump.
Both Aeon and Climateer are reposting: "The Bit Bomb" first appeared at Aeon on August 30, 2017, and graced our pages over the Labor Day weekend, September 3, 2017.

"Claude Shannon, the Las Vegas Shark"
"How Information Got Re-Invented"
"How Claude Shannon Helped Kick-start Machine Learning"
"How Claude Shannon Invented the Future"
In last week's link to Quanta Magazine's "Maxwell’s Demon And The Physics Of Information," I went off on a Claude Shannon linkfest tangent and completely forgot to link Quanta's own post on the guy.

For more on von Neumann (and students of Kobayashi Maru scenarios) we have "The Curse of Game Theory: Why It’s in Your Self-Interest to Exit the Rules of the Game"

Finally, Shannon's second wife, Betty, was Claude's collaborator and went deep into some very fancy math and science, right there with him. I should probably do a post on her.

Thursday, September 29, 2016

"The Challenges And Triumphs Of Expanding A Family-Owned Winery"

Tell me about it.*
From Forbes:
Much of the discussion about Napa Valley tends to focus on bigger producers—or, at the very least, the ones with the greatest name recognition. But in a region as important and unexpectedly diverse as Napa, there are of course countless smaller producers whose wines deserve to be tasted more broadly, and whose stories reveal a very different side to the reputation and stereotypes of one of the world’s most famous, and important, wine-growing regions. 
Taylor Family Vineyards embodies not just the current state of winemaking in Napa Valley, but the region’s history, too. In that way, as well as the various challenges and triumphs they are currently experiencing, the Taylor story sheds an eye-opening light on a region that, for all its renown and glamour, is often not all that well understood beyond its glossy surface. 
I first became aware of Taylor Family Vineyards nearly six years ago, when I received a mixed case of samples from Stags Leap District in preparation for a story I was writing on the AVA (American Viticultural Area). Stags Leap is among the most well-known of the AVAs within Napa Valley, and producers like Shafer Vineyards, Cliff Lede, Stag’s Leap Wine Cellars, Stags’ Leap Winery, Pine Ridge, Chimney Rock, and more have garnered high praise for decades. And while I was impressed with the range and high quality of all the wines in that sample case, Taylor Family Vineyards was one of the standouts–deeply expressive, soulful, and with lots of potential for aging in the cellar. Since then, I have followed their evolution with interest, and have been drinking their wines with more regularity than I would have expected. (My father, a wine collector, joined Taylor’s mailing list after tasting the remnant of my sample bottle back in 2010, and their Cabernet Sauvignons and Chardonnays are often our wine of choice to toast family birthdays, anniversaries, and other notable occasions.)...MORE
*Just kidding, despite making installment payments that could have bought France, I don't own a vineyard.
I just wanted a chance to reprise one of the all-time greatest retweet comments, this one from former chess World Champion Garry Kasparov in response to The Onion:



Saturday, April 9, 2011

Man vs. Machine on Wall Street: How Computers Beat the Market (and should they be banned?)

A couple more High Frequency Trading links. First up The Atlantic:

Wall Street, meet your post-human future. Uber-"quant" Cliff Asness bets that his high-speed computers and trading models can churn billions of dollars in profits in booms and busts alike. But can artificial intelligence really out-smart the market? 
With the winter's second blizzard raging outside, Cliff Asness sat in his relatively modest office in Greenwich, Connecticut, surrounded by three of his partners, his PR guru, an impressive collection of unread books, and a sea of foot-tall hard-plastic replicas of Spiderman, the Incredible Hulk, and friends. "Let me be technical," he said. "It all sucked."

Asness--intense, bald, and bearded, with a $500 million fortune and a doctorate in finance--was reflecting on the dark days of 2008, when capitalism seemed to be imploding, when Bear Stearns and Lehman Brothers had collapsed and the government had hastily arranged bailouts of Merrill Lynch, Morgan Stanley, Goldman Sachs, and AIG, among others.

His own business, Applied Quantitative Research--one of the world's leading quantitative-investment, or "quant," funds--had also suffered painfully. The money his team managed fell to $17.2 billion in March 2009, from a peak of $39.1 billion in September 2007, as clients headed for the exits with what was left of their cash.
How Asness's firm lost $17 billion in two years—and made it up in one.
Such losses can be fatal for fund managers like AQR, since sophisticated investors pay them big fees for exceptional performance and, understandably, have little patience for anything less. As AQR's founders felt the tremors from Wall Street rippling through their offices, Asness said, "we worried about the stability of the financial sector, the stability of the economy, and the stability of society." To Bloomberg Markets magazine, last fall, he was even more explicit: "I heard the Valkyries circling. I saw the Grim Reaper at my door."

Yet they survived. And AQR--which makes its fortune, like other quants, by using high-speed computers and financial models of extraordinary complexity--has made a stupendous recovery in the past two years. At the end of 2010, AQR had $33 billion in assets under management. Its funds' performance was up nearly 20 percent last year, after being up 38 percent in 2009.

This is all the more striking because many analysts believe the quants helped cause, or at least exacerbated, the meltdown by giving traders a false sense of security. The risk-control models these firms pioneered encouraged Wall Street to take on excessive leverage. Their trading strategies, which deliver excellent returns in normal times, functioned poorly in the irrationality of a financial panic, and reinforced a frenzy of selling. Although predictions of the death of AQR and its ilk, by the writer and investor Nassim Taleb, among others, turned out to have been greatly exaggerated, worries linger, even as some high-profile quants have surged back. Taleb and the other critics think their overreliance on computers gives quants excessive confidence and blinds them to the possibility of seemingly rare economic catastrophes--which seem to be not so rare these days. (This was the theme of Taleb's best-selling book, The Black Swan, which examined the effect of the "highly improbable" on markets, and on life.) As Exhibit A, they point to the extraordinary events of May 6, 2010, when the Dow dropped by nearly 1,000 points in a few minutes after an algorithmic program executed by the investment firm Waddell & Reed, in Kansas, triggered a terrifying blitz of automated buying and selling by other financial computers. The market quickly recovered, but many worry that the episode was a preview of greater turbulence ahead as machines gain control of more and more trading.

Scott Patterson, a former Wall Street Journal reporter and the author of the 2010 book The Quants, told me he can envision a world, not too far away, in which artificial intelligence could vanquish human trading altogether, just as it has Garry Kasparov on the chessboard. "I'm not totally against quants at all, because I think they are a very powerful way of investing," Patterson said. But, like a number of other critics, he thinks they might encourage a cycle of booms and busts, and possibly intensify the next crisis. "Go to a trading room, it's just guys on computers," he said. "And a lot of times it's not even guys, it's just the computer running the machine. I don't want to demonize it. I think there has to be a happy medium. But I'm personally worried that it can run off the rails."

As much as anyone else, Cliff Asness has shaped and embodied this world of automated high finance. And though his experience--from academia to Wall Street to Greenwich--has been marked by recurring crises, and though he admits that no one can predict when the next big one will hit, he's more confident than ever in the power of data and mathematical models, in his hands, to beat the market consistently over the long term. And, once again, the data are telling him he's right.

And from Freakonomics:

Should High-Frequency Trading Be Banned? One Nobel Winner Thinks So
The IMF recently held a conference entitled Macro and Growth Policies in the Wake of the Crisis. Here’s a video summary from Michael Spence, former Stanford School of Business dean and winner of the 2001 Nobel Memorial Prize in Economic Sciences. It includes Spence’s thoughts about inflation and the coming divergence between growth and employment in the developed world...MORE, including the vid.
HT on both: Simoleon Sense's:
Weekly Roundup 122: A Curated Linkfest For The Smartest People On The Web

Tuesday, January 20, 2015

"‘Soft’ Artificial Intelligence Is Suddenly Everywhere"

Irving Wladawsky-Berger writing at the Wall Street Journal:
Attendees tour the International Business Machines Corp. Watson immersion room in New York City, Oct. 7, 2014. (Photo: Michael Nagle/Bloomberg News)
“Artificial intelligence is suddenly everywhere. It’s still what the experts call soft A.I., but it is proliferating like mad.”  So starts an excellent Vanity Fair article, Enthusiasts and Skeptics Debate Artificial Intelligence, by author and radio host Kurt Andersen. Artificial intelligence is indeed everywhere, but these days the term is used in so many different ways that it’s almost like saying that computers are now everywhere. It’s true, but so general a statement that we must probe a bit deeper to understand its implications, starting with what is meant by soft AI, versus its counterpart, strong AI.

Soft, weak or narrow AI is inspired by, but doesn’t aim to mimic, the human brain. These are generally statistically oriented, computational intelligence methods for addressing complex problems based on the analysis of vast amounts of information using powerful computers and sophisticated algorithms, whose results exhibit qualities we tend to associate with human intelligence.

Soft AI was behind Deep Blue, IBM Corp.’s chess-playing supercomputer, which in 1997 won a celebrated chess match against then reigning champion Garry Kasparov, as well as Watson, IBM’s question-answering system, which in 2011 won the Jeopardy! Challenge against the two best human Jeopardy! players. And, as Mr. Andersen notes in his article, it’s why “We’re now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation – or, when driving, to silently obey the instructions of the voice from the G.P.S.”
This engineering-oriented AI is indeed everywhere, and being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. AI-based tools are enhancing our own cognitive powers, helping us process vast amounts of information and make ever more complex decisions.

Soft AI was nicely discussed in a recent Wired article, The Three Breakthroughs That Have Finally Unleashed AI on the World, by author and publisher Kevin Kelly, who called it a kind of “cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off.”...MORE

Monday, October 24, 2016

Irving Wladawsky-Berger On Artificial Intelligence

We haven't visited Irving in a while, here's his latest:

After many years of promise and hype, AI seems to be finally reaching a tipping point of market acceptance.  “Artificial intelligence is suddenly everywhere… it is proliferating like mad.”  So starts a Vanity Fair article published around two years ago by author and radio host Kurt Andersen.  And, this past June, a panel of global experts convened by the World Economic Forum (WEF) named Artificial Intelligence, and Open AI Ecosystems in particular, as one of its Top Ten Emerging Technologies for 2016 because of its potential to fundamentally change the way markets, business and governments work.

AI is now being applied to activities that not long ago were viewed as the exclusive domain of humans.  “We’re now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation - or, when driving, to silently obey the instructions of the voice from the G.P.S.,” wrote Andersen.  The WEF report noted that “over the past several years, several pieces of emerging technology have linked together in ways that make it easier to build far more powerful, human-like digital assistants.”

What will life be like in such an AI-based society?  What impact is it likely to have on jobs, companies and industries?  How might it change our everyday lives?

These questions were addressed in Artificial Intelligence and Life in 2030, a report that was recently published by Stanford University’s One Hundred Year Study of AI (AI100).  AI100 was launched in December 2014 “to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.”  The core activity of AI100 is to convene a Study Panel every five years to assess the then-current state of the field, review AI’s progress in the years preceding the report, and explore the potential advances that lie ahead, as well as the technical and societal challenges and opportunities these advances might raise.

The first such Study Panel, launched a year ago, comprised AI experts from academia, corporate laboratories and industry, as well as AI-savvy scholars in law, political science, policy, and economics.  The study’s overriding theme was the likely impact of AI on a typical North American city by the year 2030.  The panel examined key AI research trends, AI’s impact on various sectors of the economy, and major issues concerning AI public policy.  The report’s Executive Summary succinctly summarized its key finding:


“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.  No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.  Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers.  At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”
The report’s first section addresses a very important question: How do researchers and practitioners define Artificial Intelligence?  

From its inception about sixty years ago, there has never been a precise, universally accepted definition of AI.  Rather, the field has been guided by a rough sense of direction, such as this one by Stanford professor Nils Nilsson in The Quest for Artificial Intelligence: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”

Such a characterization of AI depends on what it means for a machine to function appropriately and with foresight.  It spans a very wide spectrum, as it should.  Is a simple calculator intelligent because it does math much faster than the human brain?  Where in the spectrum do we place thermostats, cruise control in cars, navigation applications that give us detailed directions, speech recognition, and chess- and Go-playing apps?

Over the past six decades, the frontier of what we’re willing to call AI has kept moving forward.  AI suffers from what’s become known as the AI effect: AI is whatever hasn’t been done yet, and as soon as an AI problem is successfully solved, the problem is no longer considered part of AI.  “The same pattern will continue in the future,” notes the report.  “AI does not deliver a life-changing product as a bolt from the blue.  Rather, AI technologies continue to get better in a continual, incremental way.”
One of the key ways of assessing progress in AI is to compare it to human intelligence.  Any activity that computers are now able to perform that was once the exclusive domain of humans could be counted as an AI advance.  And, one of the best ways of comparing AI to humans is to pit them against each other in a competitive game.

Chess was one of the earliest AI challenges.  Many AI leaders were then convinced that it was just a matter of time before AI would consistently beat humans at chess.  They were trying to do so by somehow programming the machines to play chess, even though to this day we don’t really understand how chess champions think, let alone how to translate their thought patterns into a set of instructions that would enable a machine to play expert chess.  All these ambitious AI approaches met with disappointment and were abandoned in the 1980s, when, after years of unfulfilled promises, a so-called AI winter of reduced interest and funding set in that nearly killed the field.

AI was reborn in the 1990s.  Instead of trying to program computers to act intelligently, the field embraced a statistical, brute force approach based on analyzing vast amounts of information with powerful computers and sophisticated algorithms.   AI researchers discovered that such an information-based approach produced something akin to intelligence or knowledge.  Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely.  The more information you had, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results.
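The scaling point can be seen even in a toy setting. The sketch below is illustrative only, not any particular firm's or lab's method: it uses a nearest-neighbor rule, one of the simplest statistical learners, on synthetic one-dimensional data from two overlapping classes. As the training set grows, accuracy on held-out points tends to climb toward the limit the noise allows.

```python
import random

def nearest_neighbor_label(train, x):
    # train: list of (value, label) pairs; classify x by its closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def make_data(n_per_class, rng):
    # Two noisy one-dimensional classes, labeled by their centers at 0.0 and 1.0.
    return [(rng.gauss(center, 0.35), center)
            for _ in range(n_per_class) for center in (0.0, 1.0)]

rng = random.Random(0)
test = make_data(200, rng)  # 400 held-out points

for n in (5, 50, 500):
    train = make_data(n, rng)
    acc = sum(nearest_neighbor_label(train, x) == y for x, y in test) / len(test)
    print(f"{2 * n:4d} training points -> accuracy {acc:.2f}")
```

No hand-written chess (or trading) rules appear anywhere: the "knowledge" is entirely in the stored data, which is the essence of the statistical turn the article describes.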

Deep Blue, IBM’s chess-playing supercomputer, demonstrated the power of such a statistical, brute force approach by defeating then reigning chess champion Garry Kasparov in a celebrated match in May 1997.  “Curiously, no sooner had AI caught up with its elusive target than Deep Blue was portrayed as a collection of brute force methods that wasn’t real intelligence… Was Deep Blue intelligent or not?  Once again, the frontier had moved.”  Now, the best chess programs consistently beat the strongest human players, and even smartphone-based apps play a strong game of chess.
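The "brute force" at the heart of machines like Deep Blue is game-tree search: look ahead through possible moves and counter-moves and pick the line that guarantees the best reachable outcome. A minimal minimax sketch over a hand-built toy tree conveys the idea (Deep Blue's actual engine added alpha-beta pruning, custom hardware, and hand-tuned evaluation functions on top of this skeleton):

```python
def minimax(node, maximizing):
    # Leaves are numeric scores from the maximizer's point of view;
    # internal nodes are lists of child subtrees.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 toy tree: the maximizer picks a branch, then the minimizer picks a leaf.
# Values after the minimizer's best reply: min(3,12)=3, min(8,2)=2, min(14,5)=5.
tree = [[3, 12], [8, 2], [14, 5]]
print(minimax(tree, True))  # prints 5: the best outcome the maximizer can force
```

The search itself encodes no chess knowledge at all; with enough computing power it simply out-calculates its opponent, which is exactly why critics dismissed it as "not real intelligence."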

As human-computer chess matches no longer attract much interest, the AI frontier has moved to games considerably more complex than chess.  In 2011, Watson, IBM’s question-answering system, won the Jeopardy! Challenge against the two best human Jeopardy! players, demonstrating that computers could now extract meaning from the unstructured knowledge embodied in books, articles, newspapers, web sites, social media, and anything written in natural language.  And earlier this year, Google’s AlphaGo claimed victory against Lee Sedol, one of the world’s top Go players, in a best-of-five match, winning four games and losing only one.  In the game of Go, there are more possible board positions than there are particles in the universe.  A Go-playing system cannot simply rely on computational brute force.  AlphaGo relies instead on deep learning algorithms, modeled partly on the way the human brain works.
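The board-positions claim is easy to sanity-check with arithmetic. A 19×19 Go board has 361 points, each empty, black, or white, so a naive count of arrangements is 3^361, on the order of 10^172; the number of legal positions is somewhat smaller (roughly 2×10^170) but still dwarfs the ~10^80 particles commonly estimated for the observable universe:

```python
# Naive upper bound: each of the 361 intersections is empty, black, or white.
naive_positions = 3 ** 361
particles = 10 ** 80  # common order-of-magnitude estimate for the observable universe

print(len(str(naive_positions)) - 1)     # order of magnitude: 172
print(naive_positions > particles ** 2)  # True: exceeds even the estimate squared
```

That gap is why exhaustive game-tree search, which sufficed for chess, is hopeless for Go.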

Given the broad, changing scope of the field, what then is Artificial Intelligence?  The AI100 Study Panel offers a circular, operational answer: AI is defined by what AI researchers do.  The report then lists the key AI research trends, that is, the hot areas AI researchers are pursuing.  These include:...
...MORE