Thursday, October 19, 2017

"Google’s A.I. Has Made Some Pretty Huge Leaps This Week" (GOOG; NVDA)

The writer, Christina Bonnington, gets it. This is a big deal.

One example of what this implies: the advantage Facebook had (its enormous cache of captioned pictures used for training its A.I.) is no longer as valuable as it was a month ago.

Another example: Google did this with its own chips, not silicon from AMD, Intel, or NVIDIA.
The Tensor Processing Units are not as versatile as the chipmakers' offerings, but in this application they appear to be at least equal, if not superior.
As they say, ya snooze, ya lose.

From Slate:
When DeepMind’s AlphaGo artificial intelligence defeated Lee Sedol, the Korean Go champion, for the first time last year, it stunned the world. Many, including Sedol himself, didn’t expect an AI to have mastered the complicated board game, but it won four out of five matches—proving it could compete with the best human players. More than a year has passed, and today’s AlphaGo makes last year’s version seem positively quaint.

Google’s latest AI efforts push beyond the limitations of their human developers. Its artificial intelligence algorithms are teaching themselves how to code and how to play the intricate, yet easy-to-learn ancient board game Go.

This has been quite the week for the company. On Monday, researchers announced that Google’s project AutoML had successfully taught itself to program machine learning software on its own. While it’s limited to basic programming tasks, the code AutoML created was, in some cases, better than the code written by its human counterparts. In a program designed to identify objects in a picture, the AI-created algorithm achieved a 43 percent success rate at the task. The human-developed code, by comparison, only scored 39 percent on the task.
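To give a concrete (if heavily simplified) flavor of what "software writing machine-learning software" means, here is a toy sketch of the propose-train-evaluate-select loop that systems like AutoML automate. The real AutoML uses a reinforcement-learning controller over a huge search space; this sketch just samples a few small network shapes at random and keeps the best one, and the dataset and candidate sizes are made-up assumptions for the demo, not anything from Google's work.

```python
# Toy architecture search: propose a candidate network shape, train it,
# score it, keep the best.  Purely illustrative; the dataset and the
# candidate layer sizes are assumptions invented for this sketch.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

best_score, best_arch = 0.0, None
for _ in range(10):                       # try 10 candidate "programs"
    # propose: a random shape of 1-3 hidden layers
    arch = tuple(random.choice([16, 32, 64, 128])
                 for _ in range(random.randint(1, 3)))
    # train + evaluate: 3-fold cross-validated accuracy on the toy data
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=400, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    # select: remember the best-performing candidate
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch} scored {best_score:.3f}")
```

Swap the random proposals for a learned controller and the toy data for a real image-recognition benchmark and you have, in spirit, the human-versus-machine comparison described above.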

On Wednesday, in a paper published in the journal Nature, DeepMind researchers revealed another remarkable achievement. The newest version of its Go-playing algorithm, dubbed AlphaGo Zero, was not only better than the original AlphaGo, which defeated the world’s best human player in May. This version had taught itself how to play the game. All on its own, given only the basic rules of the game. (The original, by comparison, learned from a database of 100,000 Go games.) According to Google’s researchers, AlphaGo Zero has achieved superhuman-level performance: It won 100–0 against its champion predecessor, AlphaGo.
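The "given only the basic rules" part is the remarkable bit. AlphaGo Zero pairs a deep network with Monte Carlo tree search, but the core idea, an agent improving purely by playing against itself with no example games, can be sketched with something far smaller. The toy below learns the take-away game Nim through self-play with tabular Q-learning; the game, the pile size, and the learning parameters are all illustrative assumptions, not DeepMind's method.

```python
# Self-play sketch: the agent is told only the rules of Nim (take 1-3
# stones, whoever takes the last stone wins) and learns by playing itself.
# A tabular stand-in for AlphaGo Zero's network plus tree search.
import random
from collections import defaultdict

PILE = 10                     # starting stones (assumed toy setting)
ACTIONS = (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 20000

Q = defaultdict(float)        # Q[(stones_left, take)] from the mover's point of view

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def best(stones):
    return max(legal(stones), key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones = PILE
    while stones > 0:
        # epsilon-greedy move by whichever player is to act
        a = random.choice(legal(stones)) if random.random() < EPSILON else best(stones)
        nxt = stones - a
        if nxt == 0:
            target = 1.0      # taking the last stone wins
        else:
            # the opponent moves next, so our value is minus their best value
            target = -max(Q[(nxt, b)] for b in legal(nxt))
        Q[(stones, a)] += ALPHA * (target - Q[(stones, a)])
        stones = nxt

for s in range(1, PILE + 1):
    print(s, "->", best(s))   # learned move for each pile size
```

After a few thousand self-play games the greedy policy discovers on its own that it should leave its opponent a multiple of four stones, with no database of human games in sight; AlphaGo Zero applies the same no-human-data principle to a game astronomically larger.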

But DeepMind’s developments go beyond just playing a board game exceedingly well. There are important implications that could positively impact AI in the near future.

“By not using human data—by not using human expertise in any fashion—we’ve actually removed the constraints of human knowledge,” AlphaGo Zero’s lead programmer, David Silver, said at a press conference.

Until now, modern AIs have largely relied on learning from vast data sets. The bigger the data set, the better. What AlphaGo Zero and AutoML prove is that a successful AI doesn’t necessarily need those human-supplied data sets—it can teach itself.

This could be important in the face of our current consumer-facing AI mess. Written by human programmers and taught on human-supplied data, algorithms (such as the ones Google and Facebook use to suggest articles you should read) are subject to the same defects as their human overlords. Without that human interference and influence, future AIs could be far superior to what we’re seeing employed in the wild today. A dataset can be flawed or skewed—for example, a facial recognition algorithm that has trouble with black faces because their white programmers didn’t feed it a diverse enough set of images. AI, teaching itself, wouldn’t inherently be sexist or racist, or suffer from those kinds of unconscious biases.

In the case of AlphaGo Zero, its reinforcement-based learning is also good news for the computational power of advanced AI networks. Early AlphaGo versions operated on 48 Google-built TPUs. AlphaGo Zero works on only four. It’s far more efficient and practical than its predecessors. Paired with AutoML’s ability to develop its own machine learning algorithms, this could seriously speed up the pace of DeepMind’s AI-related discoveries....MORE
Previously:
May 2016
Machine Learning: JP Morgan Compares Google's New Chip With NVIDIA's (GOOG; NVDA)
April 2017 
Watch Out NVIDIA: "Google Details Tensor Chip Powers" (GOOG; NVDA)
We've said NVIDIA probably has a couple-year head start, but this bears watching, so to speak....