Monday, August 5, 2013

Errrmmm, Yes: "Parallel Supercomputing: Past, Present and Future"

In Sunday's "Big Data Takes Center Stage" I intro'd Irving Wladawsky-Berger's post, saying:
...First, I realized that the company would be creating the science on the fly and, because that is inefficient, the computers they probably needed would be right at home at Lawrence Livermore or Sandia...

...Anyhoo, enough history. Here's a guy who more than likely sat in meetings with the crew who built LLNL's 1999-vintage supercomputer, the ASCI Blue-Pacific SST, an IBM SP built on PowerPC 604e processors.
Errrmmm, yes.

From Irving Wladawsky-Berger:
From the early days of the industry, supercomputers have been pushing the boundaries of IT, identifying the key barriers to overcome and experimenting with technologies and architectures that are then incorporated into the overall IT market a few years later.  While we generally focus on their computational capabilities as measured in FLOPS (floating-point operations per second), supercomputers have been at the leading edge in a number of additional dimensions, including the storage and analysis of massive amounts of data; very high bandwidth networks; and highly realistic visualizations.

Through the 1960s, 1970s and 1980s, the fastest supercomputers were based on highly specialized, powerful technologies.  But, by the late 1980s, these complex and expensive technologies ran out of gas and parallel computing became the only realistic alternative to scaling up performance.  

Instead of building machines with a small number of very fast and expensive processors, the early parallel supercomputers ganged together 10s, 100s, and over time 1000s of much less powerful and inexpensive CMOS microprocessors, similar to the micros used in the rapidly growing personal computer and workstation industry.  A similar evolution to microprocessor components and parallel architectures took place a few years later in the mainframes used in commercial applications.
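The shift he describes, many cheap processors in place of one fast one, is easy to see in miniature. Here's a minimal sketch in Python (purely illustrative, not from the post; all names and numbers are hypothetical): split a big sum across a pool of worker processes and combine the partial results.

    # Illustrative only: divide one large computation across many
    # inexpensive workers and combine the partial results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum of squares over [lo, hi) -- one worker's share of the job."""
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        N = 10_000_000
        workers = 8  # stand-in for the 10s, 100s, 1000s of cheap micros
        step = N // workers
        # Carve the range into roughly equal chunks, one per worker.
        chunks = [(k * step, N if k == workers - 1 else (k + 1) * step)
                  for k in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total)

A serial loop computes the same answer; the point is only that the work divides cleanly across however many workers you have, which is the property the early parallel machines were betting on.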

The transition to parallel supercomputing was seismic in nature.  Everything changed, from the underlying computer architecture, to the operating systems, programming tools, mathematical methods and applications.  It took considerable research and experimentation to learn to effectively use these new kinds of machines.  Moreover, there were widely different parallel architecture designs, some coming from universities and others from industry.  It wasn’t clear at all which designs worked well for different kinds of applications and would thus be commercially viable. 

The Department of Energy (DOE) national labs have long been among the world’s leading users of advanced supercomputers and played a leading role in the transition to parallel architectures.  In 1983, the DOE’s Argonne National Lab established the Advanced Computing Research Facility (ACRF), an experimental parallel computing lab which brought together computer scientists, applied mathematicians and supercomputer users and vendors to learn how to best use this new generation of parallel machines.

This past May, Argonne convened a Symposium to mark the 30th anniversary of the ACRF.  The Symposium looked both at the progress made in parallel computing over the past 30 years and the major trends for the future.  I attended the Symposium and led a panel on The Impact of Parallel Computing on the World.

I have a long personal association with supercomputing and with Argonne.  Argonne has been operated by the University of Chicago, my alma mater, since it was founded in 1946, with renowned U of C physicist Enrico Fermi among its founding scientists.  As a U of C physics graduate student in the 1960s, I used supercomputers extensively to conduct atomic and molecular calculations as part of my research with professor Clemens Roothaan, my thesis advisor and one of the early leaders in computational sciences.  There was considerable interaction between researchers at the U of C and those at nearby Argonne.

At the time, professor Roothaan was consulting with IBM on the design of a new generation of supercomputers, and I got involved looking at how to best program mathematical algorithms for these machines....MUCH MORE
Alrighty then. Now you know the reason our first link to Mr. Wladawsky-Berger's blog was titled:
Irving Wladawsky-Berger is Much Smarter Than I: "The Digitization of the Economy"