Does it still make sense? Top 500 Supercomputing list


I am in Portland, Oregon, and the new list of Top 500 supercomputing sites will be made available tomorrow, November 16, 2009.

So what? Many people ask this question today. "So what?" The business model this list promotes brought the bankruptcy of SGI, Thinking Machines, Cray Research, SiCortex, and many others who designed supercomputers around one criterion: passing a LINPACK test. LINPACK was originally introduced in 1979, 30 years ago. It tests floating-point performance and little more. It says nothing about how easy it is to solve complex problems with a given supercomputer.
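To make concrete what the benchmark does and does not measure, here is a minimal LINPACK-style sketch: time the solution of a dense linear system Ax = b and report GFLOP/s using the conventional flop count. The function name and problem size are illustrative, not the official HPL implementation.

```python
import time
import numpy as np

def linpack_like(n=1000, seed=0):
    """Time the solve of a dense n-by-n system Ax = b, LINPACK-style."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK flop count
    residual = np.linalg.norm(A @ x - b)      # sanity check on the answer
    return flops / elapsed / 1e9, residual    # GFLOP/s and residual

gflops, residual = linpack_like()
print(f"{gflops:.1f} GFLOP/s, residual {residual:.2e}")
```

The single number this produces says nothing about I/O, memory latency, integer work, or how hard the machine is to program, which is exactly the complaint above.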

Sure, science and defense need these supercomputers. They always have. However, once one developed such a winner, it was difficult, if not impossible, to sell it to a commercial entity, which also needs these powerful computers but must make money from the investment.

The LINPACK test, even in its most refined modern form, tells little or nothing about how useful the supercomputer is. Yet, adding insult to injury, the Top 500 judges claim that
"Any system designed specifically to solve the LINPACK benchmark problem or have as its major purpose the goal of a high Top500 ranking will be disqualified."
Then who will use such a system? It is like buying a Formula One car for personal use. I cannot even drive it to buy milk without a cohort of mechanics supporting me. Never mind that I cannot take any passengers... or the ten-million-dollar price tag.

By analogy, the computers we need to make money with are the computers our customers will make money with.

The time has come to compile new lists in addition to LINPACK. We should take actual applications used by enterprises and test how the fastest supercomputers run them.

We can have, for example, a TOP 500 for E-OLTP (extreme online transaction processing) computers designed to process more than 500,000 transactions per second, now that banking, credit card processing, and stock exchanges need these kinds of volumes. We can have a weather-simulation TOP 500, a genomics TOP 500, and so on.

We can add a TOP 100 supercomputing clouds list based on specific benchmarked services delivered.

We need the TOP 500 competition to push its winners in directions that create additional wealth.

These are my thoughts as I wait, with an indifference that makes me feel guilty, for the new 2009 TOP 500 LINPACK results. This is why I voice these ideas for new TOP 500 lists, not based solely on the abstract, and increasingly irrelevant, LINPACK test. We want the TOP 500 list to remain relevant and not push itself and the HPC market into insignificance.

It is time to create HPC entrepreneurs who become rich and successful, not bitter from failed ventures.

Comments

Platypus said…
Yeah, except that SiCortex *didn't* build a machine to do well on LINPACK, and never had a system on the Top500. Whatever mistakes they made, they did exactly what you suggest wrt building a machine that customers could make money with.

The bitterness I sense here is not from those who tried and failed to do something new in HPC. I for one am glad to have tried, instead of just peddling the Same Old Stuff at the Same Old Company. At least there was some fun to be had along the way.
Andrey said…
I agree with you in the sense that a LINPACK test is a coarse measure of the "utility" of a supercomputer. However, I disagree with the notion that the number of transactions is the only thing that matters to banking/cc/financial institutions. Sure, DB transactions per second is useful data to gather, but it is irrelevant for the statistical analysis that those institutions do every day.

Now, if you really would like to take this to an extreme, then how about Google's approach? Thousands of off-the-shelf computers living in a cluster. Here LINPACK is simply pointless, since the trade-off is between bandwidth and processing power.
Platypus said…
That's why I think multi-factor benchmarks like HPCC or SPEC are better than single-factor benchmarks like LINPACK or TPC. The only people whose applications map well to a benchmark are those who won the highly political game of having their application be the basis for the benchmark. For everyone else, the best approach is:

(1) Characterize your own application balance wrt floating point vs. integer vs. memory vs. bandwidth vs. latency and so on.

(2) Pick the most appropriate multi-factor benchmark, and assign weight to the components so that the weighted totals will match your own application balance.

(3) Use the weighted totals to pick two or three likely candidates, and do a live "bake-off" using the real application on real hardware.
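The three steps above can be sketched in a few lines. The component names, profile weights, and system scores below are made up for illustration; in practice they would come from profiling your own workload and from a multi-factor suite such as HPCC.

```python
# Step 1: your application's balance across benchmark components
# (weights sum to 1.0; in reality derived from profiling your workload).
app_profile = {"flops": 0.2, "memory_bw": 0.4, "latency": 0.3, "integer": 0.1}

# Step 2: per-component scores from a multi-factor benchmark, normalized
# so higher is better. These values are illustrative, not real results.
systems = {
    "system_a": {"flops": 0.9, "memory_bw": 0.5, "latency": 0.4, "integer": 0.7},
    "system_b": {"flops": 0.6, "memory_bw": 0.8, "latency": 0.7, "integer": 0.6},
}

def weighted_score(scores, weights):
    """Weight each component score by the application's balance."""
    return sum(weights[k] * scores[k] for k in weights)

# Step 3: rank, then take the top two or three into a live bake-off
# with the real application on real hardware.
ranking = sorted(systems,
                 key=lambda s: weighted_score(systems[s], app_profile),
                 reverse=True)
print(ranking)  # system_b wins on this memory- and latency-heavy profile
```

The point of the weighting is exactly the one made above: a machine that tops a single-factor list can lose badly once the score reflects your application's actual balance.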

Linear ranking in a multi-dimensional space is an exercise in ego for the sellers and laziness for the buyers. The way the Top500 folks break things down can also shed some light on architectural and component-choice trends, but many other benchmarks lack even that redeeming value. The only thing worse is a totally non-empirical "he says, she says" kind of comparison - which is exactly what most benchmark-bashing vendors actually want you to rely on.
Very good comments. We need this conversation to brainstorm new TOP 500-like lists that will make the winners serious candidates for financial success.

The current LINPACK benchmark drives the industry in a direction where commercial success is not taken into account.

My suggestion for E-OLTP transactions was just an illustration of the many possibilities.
