Monday, April 22, 2019

Graphcore CEO Takes A Shot at NVIDIA, Talks His Book, Makes a Good Point (NVDA)

In November 2016 we headlined a post "Artificial Intelligence: What Could Derail NVIDIA? A Lab in Shenzhen; A Basement in Moscow; An Office in Bristol (NVDA)".
Bristol?...

Graphcore was the "Office in Bristol".
A year later: "Sequoia Backs Graphcore as the Future of Artificial Intelligence Processors" (NVDA; INTC):
Huh.
Sometimes you get lucky...


From EE Times:

GPUs Holding Back AI Innovation
GPUs are widely used to accelerate AI computing, but are the limitations of GPU technology slowing down innovation in the development of neural networks?
In a recent interview with EETimes (Graphcore CEO Touts 'Most Complex Processor' Ever), Nigel Toon, CEO of Graphcore, explained that while GPUs are good at running convolutional neural networks (CNNs), they are not suitable for running the more complex types of neural network needed for reinforcement learning and other futuristic techniques.

“A GPU is a pretty good solution — if all you’re doing is basic, feed-forward CNNs. The problem comes when you start to have more complex neural networks. Rather than just doing it a layer at a time, I want to be able to go through some layers then feed back, and I want to be able to store information on the side which I can use as context information as I look at the next data. And if my data is changing — so, rather than it being millions of static images that I can feed in in parallel — if it's video and I'm interested in sequential frames, it's much harder to feed that in in parallel and take advantage of the wide SIMD paths in a GPU,” he said.
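
For readers who like to see the distinction Toon is drawing in code, here is a minimal sketch (plain NumPy, with made-up layer sizes; it is not Graphcore's or Nvidia's code). It contrasts a feed-forward layer over a batch of independent images, which collapses into one wide matrix multiply of the kind SIMD hardware is built for, with a recurrent pass over video frames, where each step needs the "context" carried over from the previous frame and so has to be walked sequentially.

```python
# Illustrative NumPy sketch only; layer sizes are arbitrary and the code is
# not taken from Graphcore or Nvidia.
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward case: 1,000 independent images through one dense layer.
# Every row is independent, so the whole batch becomes a single wide
# matrix multiply -- exactly the kind of work wide SIMD paths like.
images = rng.standard_normal((1000, 256))    # batch x features
W_ff = rng.standard_normal((256, 128))
batch_out = np.maximum(images @ W_ff, 0.0)   # one parallel op covers all 1,000 inputs

# Recurrent case: 1,000 sequential video frames with carried context.
# Each step depends on the hidden state from the previous frame, so the
# time dimension has to be processed one frame at a time.
frames = rng.standard_normal((1000, 256))    # time x features
W_in = rng.standard_normal((256, 128))
W_h = rng.standard_normal((128, 128))
h = np.zeros(128)                            # the "context information" carried forward
for frame in frames:                         # inherently sequential loop
    h = np.tanh(frame @ W_in + h @ W_h)
```

The point is not that the loop can't run on a GPU at all, only that the time dimension can't simply be spread across thousands of parallel lanes the way the image batch can.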

Visionary Approval
As part of our longer conversation, Toon noted that Graphcore has captured the interest of such AI visionaries as Demis Hassabis, a founder of DeepMind, and the founders of OpenAI, including Ilya Sutskever, along with many other leading researchers in machine learning. Graphcore worked with these researchers to design the IPU architecture based on the kinds of problems that they want to solve.

“All the innovators we spoke to said [using GPUs] is holding them back from new innovations,” he said. “If you look at the types of models that people are working on, they are primarily working on forms of convolutional neural networks because recurrent neural networks and other kinds of structures, [such as] reinforcement learning, don't map well to GPUs. Areas of research are being held back because there isn't a good enough hardware platform, and that's why we're trying to bring [IPUs] to market.”

Processor Development
Toon points to the development of ASIC-type accelerators that are built to accelerate specific neural networks, as well as increased interest in FPGA solutions for AI, as proof that GPUs can’t do the job well enough. There is a need, he says, for an easy-to-use processor that is designed from the ground up, specifically for machine intelligence.

“What you need to do is to extract parallelism in many different dimensions. So, rather than having an SIMD processor, what we need is a multiple-instruction, multiple-data machine,” he said. “We need to solve the problems of: ‘How can we access the memory in real time, during the compute?’ ‘How can I take pieces of data from here and there, gather that together, do the compute, and then scatter the answer back somewhere else?’ These are all the things that we have been solving with the IPU processor....MORE
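To make the "gather... compute... scatter" remark concrete, here is a minimal NumPy sketch of that access pattern (the array names, sizes, and the tanh step are our own illustration, not anything from the interview): data is pulled from scattered memory locations, worked on, and written back to a different scattered set of locations, the sort of irregular, data-dependent addressing that a wide SIMD pipeline handles awkwardly but a multiple-instruction, multiple-data machine is meant to take in stride.

```python
# Illustrative NumPy sketch only; names, sizes, and the compute step are made up.
import numpy as np

rng = np.random.default_rng(1)
memory = rng.standard_normal(10_000)         # stand-in for a pool of memory

# "Take pieces of data from here and there": an irregular gather.
src = rng.integers(0, memory.size, size=64)  # data-dependent source addresses
gathered = memory[src]

# "Do the compute" on the gathered values.
result = np.tanh(gathered) * 0.5

# "Scatter the answer back somewhere else": write the results to a
# different, equally irregular set of locations.
dst = rng.integers(0, memory.size, size=64)
memory[dst] = result
```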
If you follow the link, you'll find three references to the full interview. If you don't, here's "Graphcore CEO Touts 'Most Complex Processor' Ever".

If interested see also:
Top 10 British Artificial Intelligence Startups

British AI Startup Graphcore Raises $200 Million From BMW, Microsoft

And related (and the reason we highlighted the two chip types, ASICs and FPGAs, above):
Chips: The Accelerator Wall—A New Problem for a Post-Moore’s Law World (GPU; ASIC; FPGA)
Watch Out Nvidia, Xilinx Is Performing (reports, beats, pops) (XLNX; NVDA)
Xilinx, with its field-programmable gate array approach, versus Nvidia's more general-purpose chips is an example of the type of competition experts were predicting NVDA would face, some of them more than two years ago; links after the jump...
A Dip Into Chips: "AI Chip Architectures Race To The Edge"
"Why Alibaba is betting big on AI chips and quantum computing"
Hot Chips 2018 Symposium on High Performance Chips
Chips: "A Rogues Gallery of Post-Moore’s Law Options"
"Top-Rated Chipmaker Xilinx Gets Big Price-Target Hike On 5G Prospects" (XLNX; NVDA)