Showing posts sorted by relevance for query piz daint.

Monday, October 14, 2024

Switzerland's Best: Piz Daint has now been replaced by the Alps Supercomputer

The Piz Daint supercomputer holds a special place in our memories. Besides being the computer the Swiss from time to time loaned to CERN, it was later accelerated with Nvidia GPUs. Here's a quick note from April 2016:

CERN Will Be Using NVIDIA Graphics Processors to Accelerate Their Supercomputer (NVDA)

Our standard NVDA boilerplate: We don't do much with individual stocks on the blog but this one is special.

$36.28 last, passing the stock's old all-time high from 2007, $36.00. [Adjusted for the cumulative 40:1 stock splits since then, that's about 90 cents against the current $139.00 stock. We had started touting it at $25.00, i.e. $0.625 split-adjusted, a year earlier.]
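For anyone checking the bracketed arithmetic, a minimal sketch: the 40:1 divisor is the cumulative effect of NVDA's 4-for-1 (July 2021) and 10-for-1 (June 2024) splits; the dollar figures below simply restate the numbers in the note above.

```python
# Split-adjusting the pre-split prices quoted above into post-2024-split terms.
SPLIT_FACTOR = 4 * 10  # 4-for-1 (July 2021), then 10-for-1 (June 2024)

old_high_2007 = 36.00  # pre-split all-time high cited above
last_2016 = 36.28      # pre-split price at the time of the April 2016 post
tout_2015 = 25.00      # where the blog started following the stock

for label, px in [("2007 high", old_high_2007),
                  ("April 2016 last", last_2016),
                  ("2015 starting point", tout_2015)]:
    # Divide by the cumulative split factor to compare against today's quote.
    print(f"{label}: ${px:.2f} pre-split = ${px / SPLIT_FACTOR:.3f} split-adjusted")
```

$36.00 / 40 = $0.90 and $25.00 / 40 = $0.625, matching the "90 cents" and "$0.625" in the note.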

In 2017 Oak Ridge National Laboratory is scheduled to complete their newest supercomputer powered by NVIDIA Graphics Processing Unit chips and retake the title of World's Fastest Computer for the United States.

In the meantime NVDA is powering AI deep learning and autonomous vehicles and virtual reality and some other stuff.....

And from Swissinfo, October 3:

Swiss Alps supercomputer to leverage AI for science

Switzerland has rebooted its supercomputer network back into the global premier league of data processing. But who gets to use the sixth most powerful supercomputer in the world and what does it hope to achieve? 

Mapping the universe, sorting health facts from conspiracy theories and more precise climate modelling are just some use cases for Switzerland’s Alps supercomputer. But there are no immediate plans to allow private companies to tap into its resources.

Switzerland’s previous supercomputer, Piz Daint, has been crunching numbers for scientific research projects since 2013. It has served, among others, the Swiss meteorological service, the federal materials testing institute and the Paul Scherrer Institute of engineering sciences.

Piz Daint has now been replaced by the Alps Supercomputer, which will have 20 times the computing power of its predecessor when fully operational and the muscle to exploit the potential of artificial intelligence (AI).

It’s also the world’s sixth most powerful computer, with only the United States, Finland and Japan having more powerful machines. This has restored Switzerland’s supercomputing capabilities compared to other countries, which had been lost when Piz Daint was overtaken by more powerful machines around the world.

Access limited to science projects
But this does not mean 20 times more researchers will have access to the powerful computing network, which stretches over three sites in Switzerland and one in Italy. Some 1,800 researchers took advantage of the Piz Daint supercomputer and, so far, 1,000 have signed up to the new Alps network.

“We cannot serve a million researchers on this system,” Professor Thomas Schulthess, head of the Swiss National Supercomputing Centre (CSCS), told SWI swissinfo.ch. For a start, the CHF100 million ($118 million) supercomputer, with an annual operating budget of CHF37 million, is funded out of the public purse. “We are a subsidised infrastructure, and subsidies don’t scale. We must be very disciplined in how the infrastructure is used,” said Schulthess....

....MUCH MORE

After the Piz Daint NVDA upgrade it went from 8th fastest to 3rd fastest computer in the world:
Supercomputers "The 49th TOP500 List was published June 20, 2017 in Frankfurt, Germany."

Saturday, July 22, 2017

Supercomputers "The 49th TOP500 List was published June 20, 2017 in Frankfurt, Germany."

We're usually more timely posting the list but reality keeps intruding on the blog stuff.
A couple of things to point out: we've made a few mentions of the Swiss supercomputer Piz Daint; here's one of them from last November:
NVIDIA Builds Its Very Own Supercomputer, Enters The Top500 List At #28 (NVDA)
...To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast)....
You can see the results of the upgrade in the current list, Piz Daint went from 8th fastest to 3rd fastest in the world. 

Possibly also of interest: NVIDIA's 'puter has been bumped down to #32, behind Facebook's AI/machine-learning supercomputer, which is based on NVIDIA's DGX-1 and uses NVDA chips as its graphics accelerators.
From Top 500.org:
June 2017
In the latest rankings, the Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, maintains its top position. With a Linpack performance of 93 petaflops, TaihuLight is far and away the most powerful number-cruncher on the planet.

Tianhe-2, (Milky Way-2), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, occupies the number two position with a Linpack mark of 33.9 petaflops. Tianhe-2 was the number one system in the TOP500 list for three consecutive years, until TaihuLight eclipsed it in June 2016.

The new number three supercomputer is the upgraded Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS). The upgrade was accomplished with additional NVIDIA Tesla P100 GPUs, doubling the Linpack performance of the system’s previous mark of 9.8 petaflops in November 2016, which itself was the result of a significant upgrade. Piz Daint’s current Linpack result of 19.6 petaflops enabled the system to climb five positions in the rankings.

As a result of the Piz Daint upgrade, Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory, drops to number four in the rankings. Its Linpack mark of 17.6 petaflops has remained constant since it was installed in 2012.
Rounding out the top 10 are:
  • Sequoia (17.2 petaflops), an IBM BlueGene/Q system installed at the DOE’s Lawrence Livermore National Laboratory, at number five;
  • Cori (14.0 petaflops), a Cray XC40 system housed at the National Energy Research Scientific Computing Center (NERSC), at number six;
  • Oakforest-PACS (13.6 petaflops), a Fujitsu PRIMERGY system running at Japan’s Joint Center for Advanced High Performance Computing, at number seven;
  • Fujitsu’s K computer (10.5 petaflops), installed at the RIKEN Advanced Institute for Computational Science (AICS), at number eight;
  • Mira (8.6 petaflops), an IBM BlueGene/Q system installed at DOE’s Argonne National Laboratory, at number nine; and
  • Trinity (8.1 petaflops), a Cray XC40 system running at Los Alamos National Laboratory, at number ten.
With the two Chinese supercomputers and one Swiss system occupying the top of the rankings, this is the second time in the 24-year history of the TOP500 list that the United States has failed to secure any of the top three positions. The only other time this occurred was in November 1996, when three Japanese systems captured the top three spots....MORE
TOP 10 Sites for June 2017
For more information about the sites and systems in the list, click on the links or view the complete list.
1. Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway, NRCPC
   National Supercomputing Center in Wuxi, China
   Cores: 10,649,600 | Rmax: 93,014.6 TFlop/s | Rpeak: 125,435.9 TFlop/s | Power: 15,371 kW
2. Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P, NUDT
   National Super Computer Center in Guangzhou, China
   Cores: 3,120,000 | Rmax: 33,862.7 TFlop/s | Rpeak: 54,902.4 TFlop/s | Power: 17,808 kW
3. Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100, Cray Inc.
   Swiss National Supercomputing Centre (CSCS), Switzerland
   Cores: 361,760 | Rmax: 19,590.0 TFlop/s | Rpeak: 25,326.3 TFlop/s | Power: 2,272 kW
4. Titan - Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x, Cray Inc.
   DOE/SC/Oak Ridge National Laboratory, United States
   Cores: 560,640 | Rmax: 17,590.0 TFlop/s | Rpeak: 27,112.5 TFlop/s | Power: 8,209 kW
5. Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom, IBM
   DOE/NNSA/LLNL, United States
   Cores: 1,572,864 | Rmax: 17,173.2 TFlop/s | Rpeak: 20,132.7 TFlop/s | Power: 7,890 kW
6. Cori - Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect, Cray Inc.
   DOE/SC/LBNL/NERSC, United States
   Cores: 622,336 | Rmax: 14,014.7 TFlop/s | Rpeak: 27,880.7 TFlop/s | Power: 3,939 kW
7. Oakforest-PACS - PRIMERGY CX1640 M1, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path, Fujitsu
   Joint Center for Advanced High Performance Computing, Japan
   Cores: 556,104 | Rmax: 13,554.6 TFlop/s | Rpeak: 24,913.5 TFlop/s | Power: 2,719 kW
8. K computer - SPARC64 VIIIfx 2.0GHz, Tofu interconnect, Fujitsu
   RIKEN Advanced Institute for Computational Science (AICS), Japan
   Cores: 705,024 | Rmax: 10,510.0 TFlop/s | Rpeak: 11,280.4 TFlop/s | Power: 12,660 kW
9. Mira - BlueGene/Q, Power BQC 16C 1.60GHz, Custom, IBM
   DOE/SC/Argonne National Laboratory, United States
   Cores: 786,432 | Rmax: 8,586.6 TFlop/s | Rpeak: 10,066.3 TFlop/s | Power: 3,945 kW
10. Trinity - Cray XC40, Xeon E5-2698v3 16C 2.3GHz, Aries interconnect, Cray Inc.
    DOE/NNSA/LANL/SNL, United States
    Cores: 301,056 | Rmax: 8,100.9 TFlop/s | Rpeak: 11,078.9 TFlop/s | Power: 4,233 kW
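A quick aside of our own (not from the Top500 write-up): the Rmax/Rpeak and power columns make the payoff of the Piz Daint GPU upgrade easy to see. A minimal sketch, using figures copied straight from the June 2017 list:

```python
# Derived metrics for a few of the June 2017 Top 10 systems:
# what fraction of theoretical peak (Rpeak) each achieves on Linpack (Rmax),
# and gigaflops per watt. Values: (Rmax TFlop/s, Rpeak TFlop/s, Power kW).
top10 = {
    "Sunway TaihuLight": (93014.6, 125435.9, 15371),
    "Tianhe-2":          (33862.7,  54902.4, 17808),
    "Piz Daint":         (19590.0,  25326.3,  2272),
    "Titan":             (17590.0,  27112.5,  8209),
}

for name, (rmax, rpeak, power_kw) in top10.items():
    efficiency = rmax / rpeak          # HPL efficiency, fraction of peak
    gflops_per_watt = rmax / power_kw  # TFlop/s per kW equals GFlop/s per W
    print(f"{name}: {efficiency:.0%} of peak, {gflops_per_watt:.2f} GFlops/W")
```

Piz Daint with its new P100s works out to roughly 8.6 GFlops per watt versus about 2.1 for Titan's older K20x generation, which is the efficiency story behind the upgrade.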

Wednesday, April 13, 2016

CERN Will Be Using NVIDIA Graphics Processors to Accelerate Their Supercomputer (NVDA)

Our standard NVDA boilerplate: We don't do much with individual stocks on the blog but this one is special.
$36.28 last, passing the stock's old all time high from 2007, $36.00.

In 2017 Oak Ridge National Laboratory is scheduled to complete their newest supercomputer powered by NVIDIA Graphics Processing Unit chips and retake the title of World's Fastest Computer for the United States.

In the meantime NVDA is powering AI deep learning and autonomous vehicles and virtual reality and some other stuff.

NVDA NVIDIA Corporation daily Stock Chart
FinViz

From PC World:

Nvidia's screaming Tesla P100 GPU will power one of the world's fastest computers
The Piz Daint supercomputer in Switzerland will be used to analyze data from the Large Hadron Collider 
It didn’t take long for Nvidia’s monstrous Tesla P100 GPU to make its mark in an ongoing race to build the world’s fastest computers. 
Just a day after Nvidia’s CEO said he was “friggin’ excited” to introduce the Tesla P100, the company announced its fastest GPU ever would be used to upgrade a supercomputer called Piz Daint. Roughly 4,500 of the GPUs will be installed in the supercomputer at the Swiss National Supercomputing Center in Switzerland. Piz Daint already has a peak performance of 7.8 petaflops, making it the seventh-fastest computer in the world. The fastest in the world is the Tianhe-2 in China, which has a peak performance of 54.9 petaflops, according to the Top500 list released in November. 
Two of the world’s ten fastest computers use GPUs as co-processors to speed up simulations and scientific applications: Titan, at the U.S. Oak Ridge National Laboratory, and Piz Daint. The latter is used to analyze data from the Large Hadron Collider at CERN. 
Nvidia has already made a desktop-type supercomputer with the Tesla P100. The DGX-1 can deliver 170 teraflops of performance, or 2 petaflops when several are installed on a rack. It has eight Tesla P100 GPUs, two Xeon CPUs, 7TB of solid-state-drive storage and dual 10-Gigabit Ethernet ports. 
The GPU will also be in volume servers from IBM, Hewlett Packard Enterprise, Dell and Cray by the first quarter of next year. Huang said companies building mega data centers for the cloud will be using servers with Tesla P100s by the end of the year. 
The Tesla P100 is one of the largest chips ever made and may be one of the fastest. It has 150 billion transistors and packs many new GPU technologies that could give Piz Daint a serious boost in horsepower....MORE
Previously on the fanboi channel:
Jan. 2016
Class Act: Nvidia Will Be The Brains Of Your Autonomous Car (NVDA)
Stanford and Nvidia Team Up For Next Generation Virtual Reality Headsets (NVDA)
Quants: "Two Glenmede Funds Rely on Models to Pick Winners, Avoid Losers" (NVDA)
"NVIDIA: “Expensive and Worth It,” Says MKM Partners" (NVDA) 
May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)

Thursday, March 7, 2024

"Making GenAI more efficient with a new kind of chip" (plus some of our history with Nvidia)

The writer of this piece, Tiernan Ray, took over the Tech Trader column at Barron's from Eric Savitz (before Mr. Savitz returned to the Dow Jones empire).

Despite having a very different style, far less frenetic/borderline manic than Savitz, Mr. Ray won me over by getting Nvidia's Jensen Huang to talk about things that none of the other tech writers seemed to even be aware of.

Here he is at ZD Net, March 7, once again far ahead of the field and on an issue that is a bit of a weak spot for NVDA:

EnCharge AI's breakthrough in melding analog and digital computing could dramatically improve the energy consumption of generative AI when performing predictions. 

2024 is expected to be the year that generative artificial intelligence (GenAI) goes into production, when enterprises and consumer electronics start actually using the technology to make predictions in heavy volume -- a process known as inference.

For that to happen, the very large, complex creations of OpenAI and Meta, such as ChatGPT and Llama, somehow have to be able to run in energy-constrained devices that consume far less power than the many kilowatts used in cloud data centers.

Also: 2024 may be the year AI learns in the palm of your hand

That inference challenge is inspiring fundamental research breakthroughs toward drastically more efficient electronics.

On Wednesday, semiconductor startup EnCharge AI announced that its partnership with Princeton University has received an $18.6 million grant from the US's Defense Advanced Research Projects Agency, DARPA, to advance novel kinds of low-power circuitry that could be used in inference.

"You're starting to deploy these models on a large scale in potentially energy-constrained environments and devices, and that's where we see some big opportunities," said EnCharge AI CEO and co-founder Naveen Verma, a professor in Princeton's Department of Electrical Engineering, in an interview with ZDNET. 

EnCharge AI, which employs 50, has raised $45 million to date from venture capital firms including VentureTech, RTX Ventures, Anzu Partners, and AlleyCorp. The company was founded based on work done by Verma and his team at Princeton over the past decade or so.

EnCharge AI is planning to sell its own accelerator chip and accompanying system boards for AI in "edge computing," including corporate data center racks, automobiles, and personal computers....

....MUCH MORE

A lot of people see the opportunity in the inference, rather than the training, end of things, but inference at the edge could lead to the kind of serendipitous manufacturing—research—discovery feedback loop that Nvidia experienced when it was pushing the limits of using GPUs as accelerators for supercomputers in 2015-2016.

That was when we really got interested in Nvidia. Here's a post from that period
(unfortunately I used a dynamic rather than a static price chart, so we get the last twelve months ending today rather than 2016. Still gorgeous, though):

Wednesday, April 13, 2016 
Our standard NVDA boilerplate: We don't do much with individual stocks on the blog but this one is special.
$36.28 last, passing the stock's old all time high from 2007, $36.00.

In 2017 Oak Ridge National Laboratory is scheduled to complete their newest supercomputer powered by NVIDIA Graphics Processing Unit chips and retake the title of World's Fastest Computer for the United States.

In the meantime NVDA is powering AI deep learning and autonomous vehicles and virtual reality and some other stuff.

NVDA NVIDIA Corporation daily Stock Chart

FinViz

From PC World:

 Nvidia's screaming Tesla P100 GPU will power one of the world's fastest computers

The Piz Daint supercomputer in Switzerland will be used to analyze data from the Large Hadron Collider

It didn’t take long for Nvidia’s monstrous Tesla P100 GPU to make its mark in an ongoing race to build the world’s fastest computers.

Just a day after Nvidia’s CEO said he was “friggin’ excited” to introduce the Tesla P100, the company announced its fastest GPU ever would be used to upgrade a supercomputer called Piz Daint. Roughly 4,500 of the GPUs will be installed in the supercomputer at the Swiss National Supercomputing Center in Switzerland. Piz Daint already has a peak performance of 7.8 petaflops, making it the seventh-fastest computer in the world. The fastest in the world is the Tianhe-2 in China, which has a peak performance of 54.9 petaflops, according to the Top500 list released in November.

Two of the world’s ten fastest computers use GPUs as co-processors to speed up simulations and scientific applications: Titan, at the U.S. Oak Ridge National Laboratory, and Piz Daint. The latter is used to analyze data from the Large Hadron Collider at CERN.

Nvidia has already made a desktop-type supercomputer with the Tesla P100. The DGX-1 can deliver 170 teraflops of performance, or 2 petaflops when several are installed on a rack. It has eight Tesla P100 GPUs, two Xeon CPUs, 7TB of solid-state-drive storage and dual 10-Gigabit Ethernet ports.

The GPU will also be in volume servers from IBM, Hewlett Packard Enterprise, Dell and Cray by the first quarter of next year. Huang said companies building mega data centers for the cloud will be using servers with Tesla P100s by the end of the year.

The Tesla P100 is one of the largest chips ever made and may be one of the fastest. It has 150 billion transistors and packs many new GPU technologies that could give Piz Daint a serious boost in horsepower....MORE 

Previously on the fanboi channel:
Jan. 2016
Class Act: Nvidia Will Be The Brains Of Your Autonomous Car (NVDA)
Stanford and Nvidia Team Up For Next Generation Virtual Reality Headsets (NVDA)
Quants: "Two Glenmede Funds Rely on Models to Pick Winners, Avoid Losers" (NVDA)
"NVIDIA: “Expensive and Worth It,” Says MKM Partners" (NVDA) 
May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)

Here's another one from seven months later:

Wednesday, November 16, 2016
NVIDIA Builds Its Very Own Supercomputer, Enters The Top500 List At #28 (NVDA)

Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenues but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast).

And this isn't the DIY supercomputer we highlighted back in May 2015:

...Among the fastest processors in the business are the ones originally developed for video games, known as Graphics Processing Units, or GPUs. Since Nvidia released its Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you....
Nor is it the $130,000 supercomputer NVIDIA came up with for companies to get started in Deep Learning/AI.

No, this is NVIDIA's very own supercomputer.

Here's the brand new list (they come out every six months):
Top500 List - November 2016

And here is NVIDIA's 'puter, right behind one of the U.S. Army's machines and just ahead of Italian energy giant ENI's machine:

28 - NVIDIA Corporation, United States
DGX SATURNV - NVIDIA DGX-1, Xeon E5-2698v4 20C 2.2GHz, Infiniband EDR, NVIDIA Tesla P100, Nvidia

Some of our prior posts on the Top500.

Here's NVDA's story, from Extreme Tech:
Nvidia builds its own supercomputer, claims top efficiency spot in the TOP500....

Wednesday, June 26, 2019

World's Fastest Supercomputers: "The 53rd TOP500 List Was Published in Frankfurt June 18" (NVDA)

Five of the ten fastest computers in the world are using NVIDIA GPUs as accelerators, clocking in at 1st, 2nd, 6th, 8th and 10th fastest.
From Top500:
June 2019
BERKELEY, Calif.; FRANKFURT, Germany; and KNOXVILLE, Tenn.— The 53rd edition of the TOP500 marks a milestone in the 26-year history of the list. For the first time, all 500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.
Top 10 rundown
The top of the list remains largely unchanged, with only two new entries in the top 10, one of which was an existing system that was upgraded with additional capacity.

Two IBM-built supercomputers, Summit and Sierra, installed at the Department of Energy’s Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, respectively, retain the first two positions on the list. Both derive their computational power from Power 9 CPUs and NVIDIA V100 GPUs. The Summit system slightly improved its HPL result from six months ago, delivering a record 148.6 petaflops, while the number two Sierra system remains unchanged at 94.6 petaflops.

The Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, holds the number three position with 93.0 petaflops. It’s powered by more than 10 million SW26010 processor cores.

At number four is the Tianhe-2A (Milky Way-2A) supercomputer, developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou. It used a combination of Intel Xeon and Matrix-2000 processors to achieve an HPL result of 61.4 petaflops.

Frontera, the only new supercomputer in the top 10, attained its number five ranking by delivering 23.5 petaflops on HPL. The Dell C6420 system, powered by Intel Xeon Platinum 8280 processors, is installed at the Texas Advanced Computing Center of the University of Texas.

At number six is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. It’s equipped with Intel Xeon CPUs and NVIDIA P100 GPUs. Piz Daint remains the most powerful system in Europe.

Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories improves its performance to 20.2 petaflops, which earns it the number seven position. It’s powered by Intel Xeon and Xeon Phi processors.

The AI Bridging Cloud Infrastructure (ABCI) is installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) and is listed at number eight, delivering 19.9 petaflops. The Fujitsu-built system is equipped with Intel Xeon Gold processors and NVIDIA Tesla V100 GPUs.

SuperMUC-NG is in the number nine position with 19.5 petaflops. It’s installed at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching, near Munich. The Lenovo-built machine is equipped with Intel Platinum Xeon processors, as well as the company’s Omni-Path interconnect.

The upgraded Lassen supercomputer captures the number 10 spot, with an upgrade that boosted its original 15.4 petaflops result on HPL to 18.2 petaflops. Installed at Lawrence Livermore National Laboratory, Lassen is the unclassified counterpart to the classified Sierra system and shares the same IBM Power9/NVIDIA V100 GPU architecture.

TOP 10 Sites for June 2019
For more information about the sites and systems in the list, click on the links or view the complete list.
....MUCH MORE

Friday, November 17, 2017

The Fiftieth TOP500 List of the Fastest Supercomputers in the World

We'll be back with more commentary on what's up in High Performance Computing but as a placeholder here's Top500 with the highest of the HPC crowd:

November 2017 
The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list’s inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US gives them a solid second place finish, with Japan in third place with 35, followed by Germany with 20, France with 18, and the UK with 15.

China has also overtaken the US in aggregate performance as well. The Asian superpower now claims 35.4 percent of the TOP500 flops, with the US in second place with 29.6 percent.
The top 10 systems remain largely unchanged since the June 2017 list, with a couple of notable exceptions.

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC), and installed at the National Supercomputing Center in Wuxi, maintains its number one ranking for the fourth time, with a High Performance Linpack (HPL) mark of 93.01 petaflops.

Tianhe-2 (Milky Way-2), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is still the number two system at 33.86 petaflops.

Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, maintains its number three position with 19.59 petaflops, reaffirming its status as the most powerful supercomputer in Europe. Piz Daint was upgraded last year with NVIDIA Tesla P100 GPUs, which more than doubled its HPL performance of 9.77 petaflops.

The new number four system is the upgraded Gyoukou supercomputer, a ZettaScaler-2.2 system deployed at Japan’s Agency for Marine-Earth Science and Technology, which was the home of the Earth Simulator. Gyoukou was able to achieve an HPL result of 19.14 petaflops using PEZY-SC2 accelerators, along with conventional Intel Xeon processors. The system’s 19,860,000 cores represent the highest level of concurrency ever recorded on the TOP500 rankings of supercomputers.

Titan, a five-year-old Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory, and still the largest system in the US, slips down to number five. Its 17.59 petaflops are mainly the result of its NVIDIA K20x GPU accelerators.

Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory, is the number six system on the list with a mark of 17.17 petaflops. It was deployed in 2011.

The new number seven system is Trinity, a Cray XC40 supercomputer operated by Los Alamos National Laboratory and Sandia National Laboratories. It was recently upgraded with Intel “Knights Landing” Xeon Phi processors, which propelled it from 8.10 petaflops six months ago to its current high-water mark of 14.14 petaflops.

Cori, a Cray XC40 supercomputer, installed at the National Energy Research Scientific Computing Center (NERSC), is now the eighth fastest supercomputer in the world. Its 1,630 Intel Xeon "Haswell" processor nodes and 9,300 Intel Xeon Phi 7250 nodes yielded an HPL result of 14.01 petaflops.

At 13.55 petaflops, Oakforest-PACS, a Fujitsu PRIMERGY CX1640 M1 installed at Joint Center for Advanced High Performance Computing in Japan, is the number nine system. It too is powered by Intel “Knights Landing” Xeon Phi processors.

Fujitsu’s K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, is now the number 10 system at 10.51 petaflops. Its performance is derived from its 88 thousand SPARC64 processor cores linked by Fujitsu’s Tofu interconnect. Despite its tenth-place showing on HPL, the K Computer is the top-ranked system on the High-Performance Conjugate Gradients (HPCG) benchmark....MUCH MORE, including links to all 500 of the fastest machines on earth.

Wednesday, July 1, 2020

I Have Dishonored My Ancestors.... (Japan Dominates The World's Top Supercomputers) NVDA

....ZeroHedge beat me to the latest Top 500 Supercomputer list.
From ZH, June 28:
Japanese supercomputer Fugaku zipped past all competitors to claim the top spot in the twice-annual ranking of the world's most powerful computational machines released by research project Top500.
Statista's Katharina Buchholz reports that Fugaku, which was developed by Fujitsu in cooperation with the federal Riken research lab, was able to perform almost three times as many computations per second as former leader of the list, U.S.-based supercomputer Summit.
Fugaku has not only topped the ranking in the number of computations per second - so-called TeraFLOPS - but in all four categories that supercomputers are judged on by the project. According to the Riken lab, no other computer had achieved this feat so far. Fugaku also had the most cores of all computers ranked, the highest theoretical peak performance for computations and the highest power capacity....MORE
Looking at the June 2019 list:
World's Fastest Supercomputers: "The 53rd TOP500 List Was Published in Frankfurt June 18" (NVDA) 
Five of the ten fastest computers in the world are using NVIDIA GPUs as accelerators, clocking in at 1st, 2nd, 6th, 8th and 10th fastest.

And on this year's Top 500, June 2020, it looks like six of the top 10 fastest 'puters in the world are using NVDA GPUs as accelerators:
The 55th edition of the TOP500 saw some significant additions to the list, spearheaded by a new number one system from Japan. The latest rankings also reflect a steady growth in aggregate performance and power efficiency.
The new top system, Fugaku, turned in a High Performance Linpack (HPL) result of 415.5 petaflops, besting the now second-place Summit system by a factor of 2.8x. Fugaku is powered by Fujitsu’s 48-core A64FX SoC, becoming the first number one system on the list to be powered by ARM processors. In single or further reduced precision, which are often used in machine learning and AI applications, Fugaku’s peak performance is over 1,000 petaflops (1 exaflops). The new system is installed at RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.

Number two on the list is Summit, an IBM-built supercomputer that delivers 148.8 petaflops on HPL. The system has 4,356 nodes, each equipped with two 22-core Power9 CPUs, and six NVIDIA Tesla V100 GPUs. The nodes are connected with a Mellanox dual-rail EDR InfiniBand network. Summit is running at Oak Ridge National Laboratory (ORNL) in Tennessee and remains the fastest supercomputer in the US.

At number three is Sierra, a system at the Lawrence Livermore National Laboratory (LLNL) in California achieving 94.6 petaflops on HPL. Its architecture is very similar to Summit, equipped with two Power9 CPUs and four NVIDIA Tesla V100 GPUs in each of its 4,320 nodes. Sierra employs the same Mellanox EDR InfiniBand as the system interconnect.

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) drops to number four on the list. The system is powered entirely by Sunway 260-core SW26010 processors. Its HPL mark of 93 petaflops has remained unchanged since it was installed at the National Supercomputing Center in Wuxi, China in June 2016.

At number five is Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT). Its HPL performance of 61.4 petaflops is the result of a hybrid architecture employing Intel Xeon CPUs and custom-built Matrix-2000 coprocessors. It is deployed at the National Supercomputer Center in Guangzhou, China.

A new system on the list, HPC5, captured the number six spot, turning in an HPL performance of 35.5 petaflops. HPC5 is a PowerEdge system built by Dell and installed by the Italian energy firm Eni S.p.A., making it the fastest supercomputer in Europe. It is powered by Intel Xeon Gold processors and NVIDIA Tesla V100 GPUs and uses Mellanox HDR InfiniBand as the system network.

Another new system, Selene, is in the number seven spot with an HPL mark of 27.58 petaflops. It is a DGX SuperPOD, powered by NVIDIA’s new “Ampere” A100 GPUs and AMD’s EPYC “Rome” CPUs. Selene is installed at NVIDIA in the US. It too uses Mellanox HDR InfiniBand as the system network.

Frontera, a Dell C6420 system installed at the Texas Advanced Computing Center (TACC) in the US is ranked eighth on the list. Its 23.5 HPL petaflops is achieved with 448,448 Intel Xeon cores. 
The second Italian system in the top 10 is Marconi-100, which is installed at the CINECA research center. It is powered by IBM Power9 processors and NVIDIA V100 GPUs, employing dual-rail Mellanox EDR InfiniBand as the system network. Marconi-100’s 21.6 petaflops earned it the number nine spot on the list.

Rounding out the top 10 is Piz Daint at 19.6 petaflops, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. It is equipped with Intel Xeon processors and NVIDIA P100 GPUs.
....MUCH MUCH MORE

Piz Daint is the computer the Swiss let CERN use when they have a mind-bendingly complex operation to run.
We mentioned its Nvidia upgrade in 2017's "Supercomputers 'The 49th TOP500 List was published June 20, 2017 in Frankfurt, Germany.'" as well as Nvidia's own supercomputer, which is now #23 in the world, moving up from #28.

Wednesday, September 13, 2017

"The Astonishing Engineering Behind America's Latest, Greatest Supercomputer"

From Wired:
If you want to do big, serious science, you’ll need a serious machine. You know, like a giant water-cooled computer that’s 200,000 times more powerful than a top-of-the-line laptop and that sucks up enough energy to power 12,000 homes.

You’ll need Summit, a supercomputer nearing completion at the Oak Ridge National Laboratory in Tennessee. When it opens for business next year, it'll be the United States’ most powerful supercomputer and perhaps the most powerful in the world. Because as science gets bigger, so too must its machines, requiring ever more awesome engineering, both for the computer itself and the building that has to house it without melting. Modeling the astounding number of variables that affect climate change, for instance, is no task for desktop computers in labs. Same goes for genomics work and drug discovery and materials science. If it’s wildly complex, it’ll soon course through Summit’s circuits.

Summit will be five to 10 times more powerful than its predecessor, Oak Ridge’s Titan supercomputer, which will continue running its science for about a year after Summit comes online. (Not that there's anything wrong with Titan. It's just that at 5 years old, the machine is getting on in years by supercomputer standards.) But it’ll be pieced together in much the same way: cabinet after cabinet of so-called nodes. While each node for Titan, all 18,688 of them, consists of one CPU and one GPU, with Summit it'll be two CPUs working with six GPUs.

Think of the GPU as a turbocharger for the CPU in this relationship. While not all supercomputers use this setup, known as a heterogeneous architecture, those that do get a boost―each of the 4,600 nodes in Summit can manage 40 teraflops. So at peak performance, Summit will hit 200 petaflops, a petaflop being one million billion operations a second. "So we envision research teams using all of those GPUs on every single node when they run, that's sort of our mission as a facility," says Stephen McNally, operations manager....MORE
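The node math in that quote is easy to sanity-check; using only the article's own figures (4,600 nodes at roughly 40 teraflops each), the product lands a bit under the 200-petaflop headline, which is presumably a peak-performance rounding:

```python
# Sanity-check Summit's quoted numbers: 4,600 nodes at ~40 teraflops each.
NODES = 4_600
TFLOPS_PER_NODE = 40  # two Power9 CPUs + six V100 GPUs per node, per the article

peak_tflops = NODES * TFLOPS_PER_NODE
peak_pflops = peak_tflops / 1_000  # 1 petaflop = 1,000 teraflops

print(f"{peak_pflops:.0f} petaflops")  # 184 petaflops, in the ballpark of the quoted 200
```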
What's truly astonishing is that:

a) Wired does not mention the graphics processing units are from NVIDIA
b) the Chinese may have taken an insurmountable lead in the need-for-speed derby and the ORNL computer, designed to be the world's fastest, may not make it.

The folks at Wired are smart and have been on the tech beat for a long time; they should know better than to do puff pieces.

If interested in this stuff, whether for modeling complex-chaotic systems such as markets or weather or for national security applications or just because supercomputers are amazing in their own right see also:

April 2016
June 2016
China has had the world's fastest computer for the last three years or so, the Tianhe-2, which used Intel microprocessors, so this latest computer is a remarkable achievement. The U.S. plans to recapture the top spot for the first time in five years when Oak Ridge builds their latest machine using IBM CPUs, NVIDIA GPUs and NVIDIA's NV Link tying it all together. The ORNL 'puter should hit either the Nov. 2017 or June 2018 Top 500 lists....
November 15, 2016 

November 16, 2016
Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenues but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast).

And this isn't the DIY supercomputer we highlighted back in May 2015:
...Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you....
Nor is it the $130,000 supercomputer NVIDIA came up with for companies to get started in Deep Learning/AI.

No, this is NVIDIA's very own supercomputer.

Here's the brand new list (they come out every six months):
Top500 List - November 2016

July 22, 2017
We're usually more timely posting the list but reality keeps intruding on the blog stuff.
A couple things to point out, we've made a few mentions of the Swiss supercomputer Piz Daint, here's one of them from last November:
NVIDIA Builds Its Very Own Supercomputer, Enters The Top500 List At #28 (NVDA)
...To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast)....
You can see the results of the upgrade in the current list, Piz Daint went from 8th fastest to 3rd fastest in the world. 

Possibly also of interest, NVIDIA's 'puter has been bumped down to #32, behind Facebook's AI/machine-learning supercomputer which is based on NVIDIA's DGX-1 and uses NVDA chips as their graphics accelerator....
In fact, the wary reader may have come to the conclusion that NVIDIA and supercomputers have become an idée fixe for yours truly. From 2 1/2 years ago:

May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)
We've mentioned, usually in the context of the Top 500* fastest supercomputers, that:
Long time readers know we have a serious interest in screaming fast computers and try to get to the Top500 list a couple times a year. Here is a computer that was at the top of that list, the fastest computer in the world just four years ago. And it's being shut down.
Technology changes pretty fast. 
That was from a 2013 post.

Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you.

In addition, Nvidia has very fast connectors they call NVLink.
Using a hybrid combination of IBM Central Processing Units (CPUs) and Nvidia's GPUs, all hooked together with NVLink, Oak Ridge National Laboratory is building what will be the world's fastest supercomputer when it debuts in 2018.

As your kid plays Grand Theft Auto....
*Here's the Top 500 site, the next list is due next month. China’s National University of Defense Technology has had the top spot since the June 2013 list when it toppled Oak Ridge National Laboratory's Titan.
Be all that as it may be, here's ORNL's webpage for the new supercomputer, Summit. 

Thursday, January 11, 2018

With the Summit Supercomputer, U.S. Could Retake Computing’s Top Spot (NVDA)

We've been babbling about this 'puter for three years, usually in the context of its use of NVIDIA GPUs and NVLink connections; more after the jump.

From IEEE Spectrum:

Oak Ridge’s 200-petaflop Summit supercomputer will come on line in mid-2018
In November of 2012, the semiannual Top500 rankings of the world’s supercomputers gave top billing to a machine constructed at the Oak Ridge National Laboratory, in Tennessee. Aptly named Titan, the machine boasted a peak performance of more than 27 × 10^15 floating-point operations per second, or 27 petaflops. It was an immense computing resource for researchers in government, industry, and academe, and being at the top of the supercomputing heap, it helped to boost pride within the U.S. high-performance computing community.

The satisfaction was short-lived. Just seven months later, Titan lost the world-supercomputing crown to a Chinese machine called Tianhe-2 (Milky Way-2). And three years on, yet another Chinese number-crunching behemoth—the Sunway TaihuLight—took over the title of world’s most powerful supercomputer. Its peak performance was 125 petaflops. After that, Titan wasn’t looking so titanic anymore.

Using the Sunway TaihuLight, Chinese researchers captured the 2016 Gordon Bell Prize [PDF] for their work modeling atmospheric dynamics. “That shows it wasn’t just a stunt machine,” says Jack Dongarra of the University of Tennessee, one of the creators of the Top500 rankings.

You might be wondering why for the past five years the United States has seemingly given up on reclaiming the top spot. In fact, there was no such surrender. In 2014, U.S. engineers drafted proposals for a new generation of supercomputers. The first of these will bear fruit later this year in the form of a supercomputer named Summit, which will replace Titan at Oak Ridge. The new machine’s peak performance will be around 200 petaflops when it comes on line in a few months, which will make it the most powerful supercomputer on the planet.
Maybe.

“We’re very open in the U.S. with our machines,” says Arthur “Buddy” Bland, project director of the Leadership Computing Facility at Oak Ridge. That is, he’s confident that Summit will be completed as planned and that it will be the most powerful supercomputer in the United States. But in the meantime, China, or some other country for that matter, could field a new supercomputer or upgrade an existing one to exceed Summit’s performance. Could that really happen? “We have no idea,” says Bland.

He and his colleagues at Oak Ridge aren’t losing any sleep over the question—and they need all the sleep they can get these days because they still have a lot of work ahead of them as they labor to replace Titan with Summit. They are not, however, following the pattern that they used to build Titan, which was created as a result of a series of increasingly elaborate upgrades to an earlier Oak Ridge supercomputer called Jaguar....MUCH MORE
Previously:
May 2016
NVIDIA Sets New All Time High On Pretty Good Numbers, "Sweeping Artificial Intelligence Adoption" (NVDA)
We are fans.
Before we go any further, our NVIDIA boilerplate: we make very few calls on individual names on the blog but this one is special. 
They are positioned to be the brains in autonomous vehicles, they will drive virtual reality should it ever catch on, the current businesses include gaming graphics, deep learning/artificial intelligence, and supercharging the world's fastest supercomputers including what will be the world's fastest at Oak Ridge next year. 
Not just another pretty face.
Or food delivery app....
Sept. 2017
"The Astonishing Engineering Behind America's Latest, Greatest Supercomputer"

...a) Wired does not mention the graphics processing units are from NVIDIA
b) the Chinese may have taken an insurmountable lead in the need-for-speed derby and the ORNL computer, designed to be the world's fastest, may not make it.
The folks at Wired are smart and have been on the tech beat for a long time; they should know better than to do puff pieces.
If interested in this stuff, whether for modeling complex-chaotic systems such as markets or weather or for national security applications or just because supercomputers are amazing in their own right see also:

April 2016
June 2016
China has had the world's fastest computer for the last three years or so, the Tianhe-2, which used Intel microprocessors, so this latest computer is a remarkable achievement. The U.S. plans to recapture the top spot for the first time in five years when Oak Ridge builds their latest machine using IBM CPUs, NVIDIA GPUs and NVIDIA's NV Link tying it all together. The ORNL 'puter should hit either the Nov. 2017 or June 2018 Top 500 lists....
November 15, 2016 
November 16, 2016
Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenues but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast).

And this isn't the DIY supercomputer we highlighted back in May 2015:
...Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you....
Nor is it the $130,000 supercomputer NVIDIA came up with for companies to get started in Deep Learning/AI.

No, this is NVIDIA's very own supercomputer.

Here's the brand new list (they come out every six months):
Top500 List - November 2016

July 22, 2017
We're usually more timely posting the list but reality keeps intruding on the blog stuff.
A couple things to point out, we've made a few mentions of the Swiss supercomputer Piz Daint, here's one of them from last November:
NVIDIA Builds Its Very Own Supercomputer, Enters The Top500 List At #28 (NVDA)
...To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast)....
You can see the results of the upgrade in the current list, Piz Daint went from 8th fastest to 3rd fastest in the world. 

Possibly also of interest, NVIDIA's 'puter has been bumped down to #32, behind Facebook's AI/machine-learning supercomputer which is based on NVIDIA's DGX-1 and uses NVDA chips as their graphics accelerator....
In fact, the wary reader may have come to the conclusion that NVIDIA and supercomputers have become an idée fixe for yours truly. From 2 1/2 years ago:

May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)
We've mentioned, usually in the context of the Top 500* fastest supercomputers, that:
Long time readers know we have a serious interest in screaming fast computers and try to get to the Top500 list a couple times a year. Here is a computer that was at the top of that list, the fastest computer in the world just four years ago. And it's being shut down.
Technology changes pretty fast. 
That was from a 2013 post.

Among the fastest processors in the business are the ones originally developed for video games and known as Graphics Processing Units, or GPUs. Since Nvidia released their Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you.

In addition, Nvidia has very fast connectors they call NVLink.
Using a hybrid combination of IBM Central Processing Units (CPUs) and Nvidia's GPUs, all hooked together with NVLink, Oak Ridge National Laboratory is building what will be the world's fastest supercomputer when it debuts in 2018.

As your kid plays Grand Theft Auto....
*Here's the Top 500 site, the next list is due next month. China’s National University of Defense Technology has had the top spot since the June 2013 list when it toppled Oak Ridge National Laboratory's Titan.
Be all that as it may be, here's ORNL's webpage for the new supercomputer, Summit. 

Tuesday, June 21, 2016

Milestone: China Builds The (NEW) World's Fastest Supercomputer Using Only Chinese Components (and other news) INTC; NVDA; IBM

China has had the world's fastest computer for the last three years or so, the Tianhe-2, which used Intel microprocessors, so this latest computer is a remarkable achievement. The U.S. plans to recapture the top spot for the first time in five years when Oak Ridge builds their latest machine using IBM CPUs, NVIDIA GPUs and NVIDIA's NV Link tying it all together. The ORNL 'puter should hit either the Nov. 2017 or June 2018 Top 500 lists.

Here's more of the story in three parts. First up, the Daily Signal:

China Builds World’s Fastest Computer
On Monday, the inconceivable happened. China announced it had built the world’s fastest computer. China has always been good at copying and/or stealing intellectual property, but it has rarely produced “indigenous innovation,” particularly in the high-tech sector.

The Chinese supercomputer, called the Tianhe-1A, is capable of performing over 2.5 thousand trillion operations a second and is big enough to fill a large warehouse. The processors weigh over 150 tons and can store information equal to about a hundred million books.

At its peak, the computer can perform around 93,000 trillion calculations per second. Purportedly, the Chinese supercomputer is 30 percent faster than the fastest American computer. “Considering that just 10 years ago, China claimed a mere 28 systems on TOP500 global supercomputer listing, with none ranked in the top 30, the nation has come further and faster than any other country in the history of supercomputing,” said Top500.

Last year the U.S. blocked Intel from shipping faster semiconductor chips to China on national security grounds. According to the New York Times, the United States blocked the sale of advanced microprocessors to China over concerns they were being used in nuclear weapon development. Without the Intel chips, the Chinese were forced to develop their own semiconductors, which apparently they are doing.

Pierre Ferragu, an industry technical analyst, said the new rankings showed that China was “pulling together all the building blocks of an independent semiconductor value chain.”...
And the lead story at Top 500 (along with the new semi-annual rankings), June 20, 2016:

New Chinese Supercomputer Named World’s Fastest System on Latest TOP500 List
System achieves 93 petaflop/s running LINPACK on Chinese-designed CPUs; China draws equal to the U.S. in overall installations
FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.—China maintained its No. 1 ranking on the 47th edition of the TOP500 list of the world’s top supercomputers, but with a new system built entirely using processors designed and made in China. Sunway TaihuLight is the new No. 1 system with 93 petaflop/s (quadrillions of calculations per second) on the LINPACK benchmark.

Developed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, Sunway TaihuLight displaces Tianhe-2, an Intel-based Chinese supercomputer that has claimed the No. 1 spot on the past six TOP500 lists.

The newest edition of the list was announced Monday, June 20, at the 2016 International Supercomputer Conference in Frankfurt. The closely watched list is issued twice a year.

Sunway TaihuLight, with 10,649,600 computing cores comprising 40,960 nodes, is twice as fast and three times as efficient as Tianhe-2, which posted a LINPACK performance of 33.86 petaflop/s. The peak power consumption under load (running the HPL benchmark) is at 15.37 MW, or 6 Gflops/Watt.
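The quoted 6 Gflops/Watt isn't a separate measurement; it follows directly from the other two numbers in that sentence, as a quick check shows:

```python
# Derive TaihuLight's quoted efficiency from its HPL performance and power draw.
hpl_flops = 93e15      # 93 petaflop/s on the LINPACK benchmark
power_watts = 15.37e6  # 15.37 MW peak consumption under HPL load

gflops_per_watt = hpl_flops / power_watts / 1e9
print(f"{gflops_per_watt:.2f} Gflops/Watt")  # ~6.05, matching the quoted 6
```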

This allows the TaihuLight system to grab one of the top spots on the Green500 in terms of the Performance/Power metric.

Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory, is now the No. 3 system. It achieved 17.59 petaflop/s.

Rounding out the Top 10 are Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory; Fujitsu’s K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan; Mira, a BlueGene/Q system installed at DOE’s Argonne National Laboratory; Trinity, a Cray XC40 system installed at DOE/NNSA/LANL/SNL; Piz Daint, a Cray XC30 system installed at the Swiss National Supercomputing Centre and the most powerful system in Europe; Hazel Hen, a Cray XC40 system installed at HLRS in Stuttgart, Germany; and Shaheen II, a Cray XC40 system installed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, at No. 10.

The latest list marks the first time since the inception of the TOP500 that the U.S is not home to the largest number of systems. With a surge in industrial and research installations registered over the last few years, China leads with 167 systems and the U.S. is second with 165. China also leads the performance category, thanks to the No. 1 and No. 2 systems....MORE
More detail at Top 500's "China Tops Supercomputer Rankings with New 93-Petaflop Machine".

And from Forbes, June 17, a story we decided to hold until the Top500 list came out:

What To Expect At Next Week's ISC Supercomputing Event
Next week I will attend the annual international supercomputing event (now renamed the ISC High Performance Conference) in Frankfurt, Germany. This conference is the “tock” to the annual US-based “tick” Supercomputing event, which takes place every year after Thanksgiving and before Christmas. The European show is typically much smaller than its US cousin but affords attendees a close-up look into the vendors’ plans and the amazing science being conducted at global supercomputing centers and institutions. And it is always a good party, with some 3,000 attendees expected to make the trek to Frankfurt this year.

This will be the first ISC event to my knowledge where the keynote address is not about traditional High Performance supercomputing topics like simulation and modeling. This year, the keynote speaker is Andrew Ng, Chief Scientist at Baidu and associate professor at Stanford University. Andrew is a leading researcher in Artificial Intelligence (AI) and a high-profile advocate for the science of Machine Learning and Deep Neural Networks (DNN). This is a noteworthy departure from the norm, as the traditional High Performance Computing (HPC) community has a lot to gain from employing the techniques being researched and deployed by the DNN community and internet giants like Google, Amazon.com, Facebook, Microsoft and others.

In addition to some awesome brews and brats, here are some topics I hope to learn more about at the show.
  1. I expect we will see a status update on the upcoming Intel INTC “Knights Landing” multi-core Xeon Phi, which is expected to ship later this year. In addition to speeds and feeds, I’d like to see how it will compare to NVIDIA GPUs, especially the new Pascal generation of boards that will begin shipping about the same time. I am especially keen to learn about any benchmarks the company can provide for Deep Learning and to hear about Intel’s plans to invest in the Deep Learning Ecosystem.
  2. It is also about time that we hear more from Intel on their plans for Altera FPGAs, especially as it relates to HPC and Deep Learning. Will it target training for Deep Learning, and if so, how will the company position FPGAs with respect to Xeon Phi?
  3. From NVIDIA, I will want to hear about the productization of the Pascal P100 chip in Tesla products and also about the company’s plans for the inference side of Deep Learning outside of the automotive and embedded space where they have already mastered the market with the DrivePX2 platform. Specifically, I’d like to hear how the company plans to compete with the Google Tensor Processor for cloud AI services....
...MORE 

Monday, June 4, 2018

Nvidia debuts cloud server platform to unify AI and HPC (NVDA)

This is a pretty big deal.
High Performance Computing has been a separate and distinct class of machines since the University of Manchester's Atlas computer went online in 1962.
And now NVIDIA is blurring the line in a few different ways.

First off, existing and planned supercomputers use graphics processing units to accelerate processing speeds at the very top of the heap. Here's January 2018's "China set to launch its new supercomputer"
NVIDIA watches, some commentary after the jump...

.... On the last Top500 list of the world's fastest supercomputers, 87 systems use NVIDIA GPUs as accelerators. Not the top two, however. You have to go down to #3, Switzerland's Piz Daint, which was upgraded with NVIDIA chips in 2016. China is using an entirely different architecture, which we touched on in June 2016's "Milestone: China Builds The (NEW) World's Fastest Supercomputer Using Only Chinese Components (and other news) INTC; NVDA; IBM".
Then there was NVIDIA's own supercomputer highlighted in November 2016:

Now they're just showing off.

The computer isn't going to be a product line or anything that generates immediate revenues but it puts the company in a very exclusive club and may lead to some in-house breakthroughs in chip design going forward.
The stock is up $4.97 (+5.77%) at $91.16.

To be clear, this isn't someone using NVDA's graphics processors to speed up their supercomputer as the Swiss did with the one they let CERN use and which is currently the eighth fastest in the world or the computer that's being built right now at Oak Ridge National Laboratory and is planned to be the fastest in the world (but may not make it, China's Sunway TaihuLight is very, very fast)....
Before that they explained how to build your own (remember, this was pre-media-rapture, hence the explanatory tone) supercomputer, relayed in May 2015's:
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)
...Among the fastest processors in the business are the one's originally developed for video games and known as Graphics Processing Units or GPU's. Since Nvidia released their Tesla hardware in 2008 hobbyists (and others) have used GPU's to build personal supercomputers.
Here's Nvidias Build your Own page.
Or have your tech guy build one for you....
NVIDIA really started blurring the lines  with their mini-supercomputer for training AI in 2016:
Technology Review on NVIDIA's Pint-Sized Supercomputer (NVDA)
$129,000

In April 2018 the company had a new offering that stunned reviewers into caveman-speak:
UPDATED—NVIDIA Wants to Be the Brains Behind the Surveillance State (NVDA)
The company just rolled out a $399,000 two-petaflop supercomputer that every little totalitarian and his brother is going to lust after to run their surveillance-city smart-city data slurping dreams.

The coming municipal data centers will end up matching the NSA in total storage capacity and NVIDIA wants to be the one sifting through it all. More on this down the road, for now here's the beast.

From Hot Hardware:
NVIDIA Unveils Beastly 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100 And NVSwitch Tech (Updated)...
Ummm, beast fast.

For reference the #500 supercomputer in the world is an HPE machine that clocks in at a theoretical maximum speed of 712.9 teraflops at NASA/Goddard Space Flight Center's Climate Simulation Platform, part of the 3.5 petaflop Discover system.
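The "almost three times faster" comparison comes straight from those two figures:

```python
# DGX-2 (2 petaflops) versus the #500 system on the list (712.9 teraflops).
dgx2_tflops = 2_000       # 2 petaflops, expressed in teraflops
rank_500_tflops = 712.9   # the HPE machine at NASA/Goddard

print(f"{dgx2_tflops / rank_500_tflops:.1f}x")  # ~2.8x: "almost three times faster"
```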

That means you can buy an off-the-shelf supercomputer that is almost three times faster than the slowest 'puter on the Top500 list. Anyway, that's a much longer than usual introduction; here's the story at Silicon Angle, May 30:

Nvidia debuts cloud server platform to unify AI and high-performance computing
Hoping to maintain the high ground in artificial intelligence and high-performance computing, Nvidia Corp. late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry.
The announcement of the HGX-2 cloud-server platform (pictured), made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC.

“We believe the future requires a unified platform for AI and high-performance computing,” Paresh Kharya, product marketing manager for Nvidia’s accelerated-computing group, said during a press call Tuesday.

Others agree. “I think that AI will revolutionize HPC,” Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. “I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI.”

More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars. “These models are being updated at an unprecedented pace,” sometimes as often as hourly, Kharya said.

The HGX architecture, powered by Nvidia’s graphics processing units, or GPUs, is a data center design used in Microsoft Corp.’s Project Olympus initiative, Facebook Inc.’s Big Basin systems and Nvidia’s own DGX-1 AI supercomputers as well as services from public cloud computing leader Amazon Web Services Inc. The first version of the architecture, the HGX-1, was announced a year ago.

Essentially, the HGX-2, which consists of 16 of Nvidia’s high-end V100 GPUs, provides a building block for computer makers to create such systems. Using Nvidia’s NVLink chip interconnect system, it makes the 16 GPUs look like one, the company said, delivering 2 petaflops, or 2 quadrillion floating point operations per second, a standard computing speed measure.

“Basically, you can now use HGX as a pool of 16 GPUs as if it were a single very large compute resource,” Freund explained.
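That headline figure works out to 125 teraflops per GPU, which corresponds to the V100's tensor-core (mixed-precision) rating rather than its much lower double-precision number, a distinction worth keeping in mind when comparing against Top500 HPL results:

```python
# Split the HGX-2's headline 2 petaflops across its 16 V100 GPUs.
total_pflops = 2
num_gpus = 16

tflops_per_gpu = total_pflops * 1_000 / num_gpus
print(f"{tflops_per_gpu:.0f} teraflops per GPU")  # 125: the V100's tensor-core
# (mixed-precision) rating, not the double-precision figure HPL rankings use
```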

Nvidia also said today that its own recently announced DGX-2 AI supercomputer was the first system to use HGX-2. It will sell for $399,000 when it’s available in the third quarter. Huang joked on a livestream of his conference keynote that it’s a “great value,” though he appeared to mean it as well....MUCH MORE
The stock is up  $6.99 (+2.71%) at $264.61.

See also:
March 2018 
Exascale Computers: Competing With China Is About to Get Serious
...The 200 petaflop/s (quadrillions of calculations per second) Summit supercomputer being built at Oak Ridge National Laboratory will blow past the current world's fastest, the 93 petaflop/s Sunway TaihuLight, but this latest monster is planned to be five times faster still.

At that point, if you were to use it for, saaay, training artificial intelligence, you are going places that the human mind literally can't comprehend much less forecast or, dream on, guide....