Tiernan Ray at Tech Trader Daily, May 30:
This morning I attended a most interesting conference on artificial intelligence at the Penn Club in midtown Manhattan. The meeting featured startups as well as chip-equipment giant Applied Materials (AMAT), and it was hosted by Pierre Ferragu, who recently left his post as tech analyst at Bernstein to take up a new role covering many areas of tech at New Street Partners, a research-only firm with no brokerage business. (My profile of Ferragu was published in April.)
The focus was very much on what kinds of chips will be used for “accelerating” artificial-intelligence approaches such as “deep learning.”
And it was a good survey of varying opinions on how dominant Intel (INTC), Nvidia (NVDA), and Xilinx (XLNX) will be in the future of computing.
The speakers generally agreed that silicon for AI of one sort or another will be a $15 billion annual market sometime in the next five years, and that that number may turn out to be too conservative. The question is what kinds of chips those will be.
The most extreme view was offered by Niel Viljoen, founder and chief executive of one startup, Netronome, which is making a network-acceleration chip.
Viljoen’s view is that Intel’s microprocessors will make up less and less of computing as new kinds of chips take over. “The CPU will be less involved with every byte,” said Viljoen.
“The architectures” of chips, he said, “will be about fabrics and the accelerators, and something we didn't touch on, memory, how it is organized and how that distribution happens” for memory chips such as DRAM.
Although Nvidia’s prominence in AI is recognized, thanks to its GPUs, the speakers pointed to a future in which many kinds of chips will be used, including very novel types that focus more heavily on memory circuitry than on compute circuitry. That view was reinforced by another startup, Mipsology, which is making software to better use “field-programmable gate arrays,” the programmable chips sold by Intel, Xilinx, and Lattice Semiconductor (LSCC).
Mipsology’s CEO, Ludovic Larzul, explained that some tasks in deep learning require traditional “branching” instructions, computer commands that follow a narrow “if/then/else” structure, which doesn’t sit well with GPUs.
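The tension Larzul describes comes from GPUs running many data elements in lockstep: when elements take different branches, the hardware effectively pays for both paths. A toy sketch (illustrative only, not anyone's product code) of the branchy style versus the branch-free, mask-based rewrite that data-parallel hardware prefers:

```python
import numpy as np

# Branchy, scalar-style logic: one if/then/else decision per element.
# Natural on a CPU; on a GPU's lockstep lanes, elements taking
# different branches force both paths to be executed.
def branchy(xs):
    out = []
    for x in xs:
        if x > 0:
            out.append(x * 2.0)
        else:
            out.append(-x)
    return np.array(out)

# The data-parallel rewrite: compute BOTH branches for every element,
# then select with a mask -- no data-dependent control flow.
def masked(xs):
    return np.where(xs > 0, xs * 2.0, -xs)

xs = np.array([-3.0, 1.0, 0.0, 4.0])
print(branchy(xs))  # [3. 2. 0. 8.]
print(masked(xs))   # same result, branch-free
```

When the branching logic is too irregular to rewrite this way, an FPGA, which can wire up arbitrary control paths, can be a better fit than a GPU, which is the niche Mipsology's software targets.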
Another startup presented the case for an entirely different kind of chip for AI. Syntiant, which came out of “stealth mode” just a couple of weeks ago, and most of whose executives came from Broadcom (AVGO), is making a chip focused on memory, one that uses analog chip expertise more than digital-semiconductor approaches, more akin to what one finds at companies such as Analog Devices (ADI).
As CEO Kurt Busch described the company's chips, “We are doing the computation in memory […] We are 100% eliminating that memory bandwidth bottleneck” of conventional memory chips.
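The bottleneck Busch refers to is the traffic between memory and processor: a conventional design must stream every weight of a neural network from memory to the compute units on each inference, while compute-in-memory leaves the weights where they are and moves only the inputs. A back-of-envelope sketch (the layer sizes are illustrative assumptions, not Syntiant's figures):

```python
# Toy fully connected layer with 8-bit weights (illustrative numbers).
weights = 512 * 512          # parameters in the layer
bytes_per_weight = 1         # 8-bit quantized weights
activations = 512            # input vector length, 1 byte each

# Conventional design: weights AND inputs cross the memory bus.
conventional_traffic = weights * bytes_per_weight + activations

# Compute-in-memory: weights stay put; only inputs move.
in_memory_traffic = activations

print(conventional_traffic)  # 262656 bytes per inference
print(in_memory_traffic)     # 512 bytes per inference
```

Even for this small layer the conventional path moves roughly 500 times more data, which is why eliminating weight movement matters so much for power-constrained parts.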
Syntiant focuses on using those memory chips for “edge” devices: cars, smartwatches, and other gadgets with tight power constraints that can't always connect to cloud data centers for machine learning.
Busch’s example of edge AI was traffic cameras. “There are traffic cameras on every corner, and they are taking all the license plates they see and sending those images to the cloud,” he said. “Why not instead have the cloud tell the cameras we are looking for this particular license plate, and just have the cameras that see that license plate send their images to the cloud.”...MUCH MORE