So, Where Is High Performance Computing At Right Now? "Data-Hungry Algorithms and the Thirst for AI"
From HPCWire, March 29:
At Tabor Communications’ Leverage Big Data + EnterpriseHPC Summit
in Florida last week, esteemed HPC professional Jay Boisseau, chief HPC
technology strategist at Dell EMC, engaged the audience with his
presentation, “Big Computing, Big Data, Big Trends, Big Results.”
Trends around big computing and big data are converging in powerful
ways, including the Internet of Things (IoT), artificial intelligence
(AI) and deep learning. Innovating and competing is now about big,
scalable computing and big, fast data analytics – and “those with the
tools and talent will reap the big rewards,” Boisseau said.
Prior to joining Dell EMC (then Dell Inc.) in 2014, Boisseau made his
mark as the founding director of the Texas Advanced Computing Center
(TACC). Under his leadership the site became a center of HPC innovation,
a legacy that continues today under Director Dan Stanzione.
“I’m an HPC person who’s fascinated by the possibilities of
augmenting intelligence with deep learning techniques; I’ve drunk the
‘deep learning Kool-Aid,’” Boisseau told the crowd of advanced computing
professionals.
AI as a field goes back to the 50s, Boisseau noted, but the current
proliferation of deep learning using deep neural networks has been made
possible by three advances: “One is that we actually have big data;
these deep learning algorithms are data hungry. Whereas we sometimes
lament the growth of our data sizes, these deep neural networks are
useless on small data. Use other techniques if you have small data, but
if you have massive data and you want to draw insights that you’re not
even sure how to formulate the hypothesis ahead of time, these neural
network based methods can be really really powerful.
“Parallelizing the deep learning algorithms was another one of the
advances, and having sufficiently powerful processors is another one,”
Boisseau said.
AI, big data, cloud and deep learning are all intertwined and they
are driving rapid expansion of the market for HPC-class
hardware. Boisseau mines for correlations with the aid of Google Trends;
the fun-to-play-with Google tool elucidates the contemporaneous rise of
big data, deep learning, and IoT. Boisseau goes a step further, showing
how Nvidia stock floats up on these tech trends.
The narrow point here is that deep learning/big data is an engine for
GPU sales; the larger point is that these multiple related trends are
driving silicon specialization and impacting market dynamics. As
Boisseau points out, we’re only at the beginning of this trend cluster
and we’re seeing silicon developed specifically for AI workloads as
hardware vendors compete to establish themselves as the incumbent in
this emerging field.
Another deep learning champion, Nvidia CEO Jen-Hsun Huang, refers to machine learning as HPC’s first consumer killer app.
When Nvidia’s CUDA-based ecosystem for HPC application acceleration
launched in 2006, it kick-started an era of heterogeneity in HPC (we’ll
give the IBM-Sony Cell BE processor some cred here too, even if the processor design was an evolutionary dead end).
Fast forward to 2013-2014, and the emerging deep learning community
found a friend in GPUs. With Nvidia, they could get their foot in the DL
door with an economical gaming board and work their way up the chain to
server-class Tesla GPUs for maximum bandwidth and FLOPS.
Optimizations for single-precision (32-bit) processing, and support
for half-precision (16-bit) on Nvidia’s newer GPUs, translate
into faster computation for most AI workloads, which, unlike many
traditional HPC applications, do not require full 64-bit precision. Intel
is incorporating variable-precision compute into its next-gen Phi
product, the Knights Mill processor (due out this year).
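To make the precision point concrete, here is a rough back-of-the-envelope sketch of our own (not from the HPCWire piece, and assuming only NumPy): the same matrix multiply run in 64-, 32- and 16-bit floats. CPU NumPy emulates float16, so the timings are not a GPU benchmark; the takeaway is the memory footprint and the accuracy given up as the bits drop, which is the trade deep learning workloads can tolerate and many traditional HPC codes cannot.

import time
import numpy as np

n = 1024
rng = np.random.default_rng(0)
a64 = rng.standard_normal((n, n))
b64 = rng.standard_normal((n, n))
ref = a64 @ b64  # full 64-bit result, used as the accuracy yardstick

for dtype in (np.float64, np.float32, np.float16):
    a, b = a64.astype(dtype), b64.astype(dtype)
    t0 = time.perf_counter()
    c = a @ b  # same operation, fewer bits per element
    elapsed = time.perf_counter() - t0
    err = float(np.max(np.abs(c.astype(np.float64) - ref)))
    print(f"{np.dtype(dtype).name:8}  {elapsed:7.3f} s  "
          f"max abs error {err:.2e}  {a.nbytes / 2**20:4.0f} MiB per operand")

On hardware with native FP16 support the timing column is where the speedup shows up; on a plain CPU the memory and error columns carry the point.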
Boisseau observed that starting about two decades ago HPC began the
swing towards commodity architectures, with the invention of
commodity-grade Beowulf clusters by Thomas Sterling and Donald Becker in 1994. Benefiting
from PC-based economies of scale, these x86 server-based Linux clusters
became the dominant architecture in HPC. In turn, this spurred the
movement toward broader enterprise adoption of HPC.
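For readers who have never touched one of those machines, the programming model is as commodity as the hardware. Here is a toy sketch of our own (assuming the mpi4py package and an MPI runtime; the script name pi_mpi.py is just for illustration) of the split-the-work, reduce-the-answer pattern those x86 Linux clusters have run since the Beowulf days:

# Run with, e.g.:  mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each process integrates its own slice of 4/(1+x^2) over [0, 1] (which equals pi),
# then the partial sums are combined on rank 0 with a single reduction.
n = 10_000_000
partial = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2) for i in range(rank, n, size)) / n
pi = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"pi is approximately {pi:.8f}, computed by {size} processes")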
Although Xeon-flavored x86 is something of a de facto standard in HPC
(with > 90 percent share), the pendulum appears headed back toward
greater specialization and greater “disaggregation of technology,” to
use a phrase offered by industry analyst Addison Snell
(CEO, Intersect360 Research). Examples include IBM’s OpenPower systems;
GPU-accelerated computing (and Power+GPU); ARM (now in server variants
with HPC optimizations); AMD’s Zen/Ryzen CPU; and Intel’s Xeon Phi line
(also its Altera FPGAs and imminent Xeon Skylake).
A major driver of all this: a gathering profusion of data.
“In short, HPC may be getting diverse again, but much of the forcing
function is big data,” Boisseau observed. “Very simply, we used to have
no digital data, then a trickle, but the ubiquity of computers, mobile
devices, sensors, instruments and user/producers has produced an
avalanche of data.”
Buzz terminology aside, big data is a fact of life now, “a forever
reality,” and those who can use big data effectively (or just “data” if
the “big” tag drops off) will be in a position to out-compete, Boisseau
added.
...MUCH MORE
See also: "Bernstein: 'AI, the data flood and the future of investment research?'"