Here's more of the story in three parts. First up, the Daily Signal:
China Builds World’s Fastest Computer
On Monday, the inconceivable happened. China announced it had built the world’s fastest computer. China has always been good at copying and/or stealing intellectual property, but it has rarely produced “indigenous innovation,” particularly in the high-tech sector.
The Chinese supercomputer, called the Tianhe-1A, is capable of performing over 2.5 thousand trillion operations a second and is big enough to fill a large warehouse. The processors weigh over 150 tons and can store information equal to about a hundred million books.
At its peak, the computer can perform around 93,000 trillion calculations per second. Purportedly, the Chinese supercomputer is 30 percent faster than the fastest American computer. “Considering that just 10 years ago, China claimed a mere 28 systems on TOP500 global supercomputer listing, with none ranked in the top 30, the nation has come further and faster than any other country in the history of supercomputing,” said Top500.
Last year the U.S. blocked Intel from shipping faster semiconductor chips to China on national security grounds. According to the New York Times, the United States blocked the sale of advanced microprocessors to China over concerns they were being used in nuclear weapon development. Without the Intel chips, the Chinese were forced to develop their own semiconductors, which apparently they are doing.
Pierre Ferragu, an industry technical analyst, said the new rankings showed that China was “pulling together all the building blocks of an independent semiconductor value chain.”...
And the lead story at Top 500 (along with the new semi-annual rankings), June 20, 2016:
New Chinese Supercomputer Named World’s Fastest System on Latest TOP500 List
System achieves 93 petaflop/s running LINPACK on Chinese-designed CPUs
China Draws Equal to the U.S. in Overall Installations
FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.—China maintained its No. 1 ranking on the 47th edition of the TOP500 list of the world’s top supercomputers, but with a new system built entirely using processors designed and made in China. Sunway TaihuLight is the new No. 1 system with 93 petaflop/s (quadrillions of calculations per second) on the LINPACK benchmark.
More detail at Top 500's "China Tops Supercomputer Rankings with New 93-Petaflop Machine".
Developed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, Sunway TaihuLight displaces Tianhe-2, an Intel-based Chinese supercomputer that has claimed the No. 1 spot on the past six TOP500 lists.
The newest edition of the list was announced Monday, June 20, at the 2016 International Supercomputer Conference in Frankfurt. The closely watched list is issued twice a year.
Sunway TaihuLight, with 10,649,600 computing cores comprising 40,960 nodes, is twice as fast and three times as efficient as Tianhe-2, which posted a LINPACK performance of 33.86 petaflop/s. The peak power consumption under load (running the HPL benchmark) is 15.37 MW, or 6 Gflops/Watt.
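The arithmetic behind those figures is easy to verify from the numbers quoted above; a quick back-of-the-envelope sketch:

```python
# Sanity-check the TOP500 figures quoted above.
taihulight_rmax_pflops = 93.0   # LINPACK (Rmax) performance, petaflop/s
taihulight_power_mw = 15.37     # peak power under HPL load, megawatts
tianhe2_rmax_pflops = 33.86     # previous No. 1 system

# petaflop/s -> Gflop/s and MW -> W are both factors of 1e6, so they cancel:
gflops_per_watt = taihulight_rmax_pflops / taihulight_power_mw
print(f"{gflops_per_watt:.2f} Gflops/Watt")  # ~6.05, matching the "6 Gflops/Watt" figure

# Raw Rmax ratio versus Tianhe-2:
speedup = taihulight_rmax_pflops / tianhe2_rmax_pflops
print(f"{speedup:.2f}x Tianhe-2's LINPACK performance")  # ~2.75x
```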
This allows the TaihuLight system to grab one of the top spots on the Green500 in terms of the Performance/Power metric. Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory, is now the No. 3 system. It achieved 17.59 petaflop/s.
Rounding out the Top 10 are Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory; Fujitsu’s K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan; Mira, a BlueGene/Q system installed at DOE’s Argonne National Laboratory; Trinity, a Cray XC40 system installed at DOE/NNSA/LANL/SNL; Piz Daint, a Cray XC30 system installed at the Swiss National Supercomputing Centre and the most powerful system in Europe; Hazel Hen, a Cray XC40 system installed at HLRS in Stuttgart, Germany; and, at No. 10, Shaheen II, a Cray XC40 system installed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.
The latest list marks the first time since the inception of the TOP500 that the U.S. is not home to the largest number of systems. With a surge in industrial and research installations registered over the last few years, China leads with 167 systems and the U.S. is second with 165. China also leads the performance category, thanks to the No. 1 and No. 2 systems....MORE
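For readers unfamiliar with the benchmark behind these rankings: LINPACK (HPL) times the solution of a very large dense linear system Ax = b and converts that time to flop/s using the standard operation count for LU factorization, roughly 2/3·n³. A toy single-machine sketch of the same idea (this is an illustration, not the real distributed HPL code; the function name is ours):

```python
import time
import numpy as np

def toy_linpack_gflops(n=2000, seed=0):
    """Time the solve of a dense n x n system Ax = b and convert to
    Gflop/s using the standard LU operation count (2/3*n^3 + 2*n^2)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(A, b)       # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    assert np.allclose(A @ x, b)    # sanity check: it really solved the system
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

print(f"This machine: ~{toy_linpack_gflops():.1f} Gflop/s")
```

For scale, a decent laptop manages tens of Gflop/s on this; TaihuLight's 93 petaflop/s is 93,000,000 Gflop/s.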
And from Forbes, June 17, a story we decided to hold until the Top500 list came out:
What To Expect At Next Week's ISC Supercomputing Event
Next week I will attend the annual international supercomputing event (now renamed the ISC High Performance Conference) in Frankfurt, Germany. This conference is the “tock” to the annual US-based “tick” Supercomputing event, which takes place every year after Thanksgiving and before Christmas. The European show is typically much smaller than its US cousin but affords attendees a close-up look into the vendors’ plans and the amazing science being conducted at global supercomputing centers and institutions. And it is always a good party, with some 3,000 attendees expected to make the trek to Frankfurt this year....MORE
This will be the first ISC event to my knowledge where the keynote address is not about traditional High Performance supercomputing topics like simulation and modeling. This year, the keynote speaker is Andrew Ng, Chief Scientist at Baidu and associate professor at Stanford University. Andrew is a leading researcher in Artificial Intelligence (AI) and a high-profile advocate for the science of Machine Learning and Deep Neural Networks (DNN). This is a noteworthy departure from the norm, as the traditional High Performance Computing (HPC) community has a lot to gain from employing the techniques being researched and deployed by the DNN community and internet giants like Google, Amazon.com, Facebook, Microsoft and others.
In addition to some awesome brews and brats, here are some topics I hope to learn more about at the show.
- I expect we will see a status update on the upcoming Intel “Knights Landing” multi-core Xeon Phi, which is expected to ship later this year. In addition to speeds and feeds, I’d like to see how it will compare to NVIDIA GPUs, especially the new Pascal generation of boards that will begin shipping about the same time. I am especially keen to learn about any benchmarks the company can provide for Deep Learning and to hear about Intel’s plans to invest in the Deep Learning Ecosystem.
- It is also about time that we hear more from Intel on their plans for Altera FPGAs, especially as it relates to HPC and Deep Learning. Will it target training for Deep Learning, and if so, how will the company position FPGAs with respect to Xeon Phi?
- From NVIDIA, I will want to hear about the productization of the Pascal P100 chip in Tesla products and also about the company’s plans for the inference side of Deep Learning outside of the automotive and embedded space where they have already mastered the market with the Drive PX 2 platform. Specifically, I’d like to hear how the company plans to compete with the Google Tensor Processing Unit (TPU) for cloud AI services....