[l/m 2/8/2008] What *IS* a super? comp.par/comp.sys.super (18/28) FAQ
Eugene Miya
2008-01-18 13:03:02 UTC
Archive-Name: superpar-faq
Last-modified: 8 Feb 2006

18 Supercomputing and Crayisms
20 IBM and Amdahl
22 Grand challenges and HPCC
24 Suggested (required) readings
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification
4 Comp.parallel news group history
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives and references

Not heard in these parts:

This computer is good for {select one: 51%, 90%, 99%}
of your needs.

What constitutes a supercomputer?
What makes a supercomputer?

The fastest, most powerful machine to solve a problem today.
Generally credited to Dr. Sid Fernbach, George Michael,
Jack Worlton, and others.

Millennial Edition of the
American National Standard Dictionary of Information Technology (ANSDIT):
on http://www.cbema.org/ncits/ :

Any of the class of computers that have the highest processing speed
and capacity available at a given time.

What if I qualify that with "cost?" ["for the cheapest"]
Then, it's not a supercomputer. Period.

It might be a minisupercomputer, though.
Don't let George know that I said that (he's much more hardline).


Most likely on an AEC viewgraph in the late 1950s thru early 1970s.

The earliest published reference I have in my biblio is:

%A T. C. Chen
%T Unconventional Superspeed Computer Systems
%J Proceedings AFIPS Spring Joint Computer Conference
%D 1971
%P 365-371
%K btartar

Parallel systems were also under construction at the time.

What was the first supercomputer?

Probably the most credit goes to the Cray-1. Cray himself never used the term,
but clearly the CDC 7600 and the CDC 6600 are given definite credit
(also Cray designs). The IBM STRETCH and other 7xxx machines were
certainly among the most powerful computers before then,
but it is important to remember that there were few computers in those days.
Restated: all computers were "super" in those days.

Other answers

0) A Japanese company. ;-)

1) "My definition is 'best,'
"A Supercomputer is the one that runs your problem(s) the fastest.""

2) "A supercomputer is a device for converting a CPU-bound problem
into an I/O bound problem." [Ken Batcher]

3) "A supercomputer is one that is only one generation behind what
you really need." Neil Lincoln's definition.

3a) "Hardware above and beyond, software behind and below"

3b) A machine to solve yesterday's problems at today's speeds.

4) Page _one_ of the Linpack-report...

What is Linpack? (LINPACK)

Linpack100x100 - All Fortran, dominated by daxpy unless advanced compiler
optimizations are available. Seldom quoted in marketing literature
because the performance is much lower than the following two.
However, Dongarra sorts his chart by machine performance on this benchmark.
Linpack1000x1000 - Typically Vendor Library routines which use BLAS3 or
LAPACK routines (N**2 data refs for N**3 operations)
Shows single processors with high floating point capacity in favorable
light, so often quoted in marketing literature.
Linpack NxN - problem size determined by Vendor, good for parallel machines
since with correct choice of problem size can maximize the computation
per communication step. Often quoted in marketing literature for
the larger parallel systems.
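The daxpy kernel that dominates Linpack 100x100 is tiny. A minimal C sketch
(the benchmark itself is Fortran; this is just the operation, not Dongarra's
ground-rules code):

```c
#include <stddef.h>

/* daxpy: y <- a*x + y, the inner loop that dominates Linpack 100x100.
   On a vector machine this maps onto vector multiply-add instructions;
   on scalar hardware it is this simple loop. */
void daxpy(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

Nearly all of the benchmark's N**3 floating-point operations pass through
this one loop, which is why compiler (or library) handling of it dominates
the reported number.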

"A supercomputer is a machine which costs between $7M and $20M.
[~1984 prices].
[Today, I guess you could change the range to $10M-$30M or so (how much does a
full-up T-90 go for at the usual discount? {
T932 = $41M or so, depending on how much RAM you put in it, pre-discount.
This was right before the SGI/Cray split.
http://www.research.microsoft.com/barc/gbell/craytalk/index.htm is dead
now, alas. }]


For some strange and mysterious reason, this really used
to bug people who wanted to believe that "supercomputers"
had a kind of magical, mystical aura. For some reason,
the same folks would get mad when, by the numbers, their
PCs were about ~1/1,000,000 of the then-current Cray & CDC -
they also wanted to "believe in" their PCs. My puzzlement
over this double denial is probably why I am not a successful politician.
--Hugh LeMaster

See also Grand Challenges panel.

Where do the terms minisupercomputer and Crayette come from?

Convex Computer Corp. [Now part of H-P] coined the term "minisupercomputer"
and that has largely stuck even though they consider themselves now
a full-fledged supercomputer company. Need to check some Datamation article.

"Crayette" came from Datamation for Scientific Computer Systems [SCS],
because SCS had a Cray/COS object-code-compatible X-MP machine at
a fraction of the cost/performance.


At this time, some of the SGI assets of the former Cray Research, Inc.
have been acquired by Tera Computer, Inc. The new entity is tentatively
to be named
Cray, Inc.
Tera will have the rights to the Cray name as well as T90, T-3, and SV lines.
Information should be considered "fluid."

Something like 2/3 of the "assets" of the former Cray Research
are staying with the mother company (SGI)...at least here in Chippewa Falls.

The news group has covered a variety of Crayisms or sayings (some are
collected in the article below).

%A Russell Mitchell
%T The Genius: Meet Seymour Cray, Father of the Supercomputer
%J Business Week
%N 3157
%D April 30, 1990
%P 80-86
%K Cover story, biography, circular slide rule, Cray-3,
%X Text of this article is available via Dialog(R) from McGraw-Hill

Some of these are Rollwagon-isms.

On Schedules and bureaucracy:
"Five Year Goal: Build the biggest computer in the world.
One Year Goal: Achieve one-fifth of the above."

On 2s-complement arithmetic.

'Although many "Seymour stories" are based in fact,
most are wildly exaggerated:'

On digging holes (tunnels): a 12-foot hole for wind surfing gear.

On burning boats (Rollwagen: made up the party and Carolyn Cray Bain:
"it was the easiest way to get rid of a boat").

Virtual Memory (compared with sex).
"Memory is like an orgasm - it's better when you don't have to fake it."
"You can't fake what you don't have".
"Can't use what you ain't got!"
"In this business, you can't fake what you don't have"
[Gee, I guess this quote makes this FAQ R-rated.]

"If you were plowing a field, which would you rather use? Two strong oxen
or 1024 chickens?"
-- Seymour Cray

Scene: 1979 Cray Research, Inc. Annual Meeting
Lutheran Brotherhood Building, Minneapolis, MN.
Q & A period, after the address by the Officers of the Company,

Q: "Mr. Cray, ... Since you seem to have implemented almost
all of the current schemes published in the scientific press
on improving performance in your systems, I was wondering
why you didn't also provide for virtual memory?"

A: From Mr. Cray: "Well as you know, over the years I have
provided the largest physical memories available for use.
The addition of a "virtual memory" scheme would have
added another level of hardware and hardware addressing
delays in accessing code and data.
I believe that it's better to spend the resource providing for
a larger overall memory system for the programmer. ...
Historically, this is what the programmers have preferred."

On wood paneling.

I hear Seymour Cray designs machines on his Apple Macintosh.
And that Apple designs Macintoshes on their Cray.

%A Marcelo A. Gumucio
%T CRI Corporate Report
%J Cray User Group 1988 Spring Proceedings
%C Minneapolis, MN
%D 1988
%P 23-28
%K 21st Meeting
%X Seymour has 6 Apple Macs (Macintosh) used to design Crays (not just one).
Q&A section.

[Gordon Bell {See the IBM panel} admits he designs his computers on Macs, too.]
[Edward Teller designs thermonuclear devices on a Mac.]

Alas, this is getting old. Seymour died.
Apple is only using their EL as a file server.

Apple Computer owned 4 systems:
sn210 & sn1104 -- X-MPs,
sn1622 -- Y-MP, &
sn5414 -YMP/EL [3 Dec 93--96 Grumman lease back, East coast? -- 1 Jun 98?]

We have also covered the parity quote (panel 10).
1) Mr. Cray had always worked with core (yes Virginia, little ferrite
toruses with wires hand threaded through them). Core memory was rock stable
& almost *never* failed. My RCA 70/45 crashed 3 times in 4 years with
memory parity errors and one of those crashes was due to a friend hitting
the A/C Power Emergency Off button on the console!

2) When he designed the first Cray-1, s/n-1, Mr. Cray used RAM chips with
straight parity. The system was installed at the Los Alamos National
Laboratory. It averaged 20 minutes of blinding speed per system failure
(due to a parity error in memory). This was obviously a problem, so, after
consulting with the LANL folks ...

3) Development was halted on s/n-2.
The next machine, s/n-3, was designed
with Single Error Correction - Double Error Detection (SECDED) parity in
its 1 MW memory.
This machine was sold to the National Center for Atmospheric Research (NCAR)
where it ran (with very few double-bit error crashes) for many years.
An aside here is that NCAR had the absolute audacity to
require that an Operating System come with the system, so Cray hired a
(shudder) programmer to write one!

4) Note that this is memory! The Cray-1 line had SECDED memory. No parity
checking was done in the CPU. The same was done for the X-MP. The Y-MP
extended parity checking to the CPU. ...
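The SECDED idea can be illustrated in miniature with a Hamming(7,4) code:
parity bits are placed so that the pattern of failed checks (the syndrome)
names the erroneous bit position, allowing correction rather than mere
detection. This is an illustrative sketch, not the CRI circuit; real Cray
SECDED worked on 64-bit words with 8 check bits, the extra bit giving
double-error detection.

```c
#include <stdint.h>

/* Encode 4 data bits into a 7-bit Hamming codeword.
   Positions 1..7: p1, p2 and p4 are parity; 3, 5, 6, 7 carry data. */
uint8_t hamming74_encode(uint8_t d)          /* d: 4 data bits */
{
    uint8_t b3 = (d >> 3) & 1, b5 = (d >> 2) & 1,
            b6 = (d >> 1) & 1, b7 = d & 1;
    uint8_t p1 = b3 ^ b5 ^ b7;               /* covers positions 1,3,5,7 */
    uint8_t p2 = b3 ^ b6 ^ b7;               /* covers positions 2,3,6,7 */
    uint8_t p4 = b5 ^ b6 ^ b7;               /* covers positions 4,5,6,7 */
    return (uint8_t)((p1 << 6) | (p2 << 5) | (b3 << 4) |
                     (p4 << 3) | (b5 << 2) | (b6 << 1) | b7);
}

/* Decode, correcting any single-bit error; returns the 4 data bits. */
uint8_t hamming74_decode(uint8_t c)
{
#define B(pos) ((c >> (7 - (pos))) & 1)      /* bit at codeword position */
    uint8_t s1 = B(1) ^ B(3) ^ B(5) ^ B(7);
    uint8_t s2 = B(2) ^ B(3) ^ B(6) ^ B(7);
    uint8_t s4 = B(4) ^ B(5) ^ B(6) ^ B(7);
    uint8_t syndrome = (uint8_t)((s4 << 2) | (s2 << 1) | s1); /* 0 = clean */
    if (syndrome)
        c ^= (uint8_t)(1 << (7 - syndrome)); /* syndrome names the bad bit */
#undef B
    return (uint8_t)((((c >> 4) & 1) << 3) | (((c >> 2) & 1) << 2) |
                     (((c >> 1) & 1) << 1) | (c & 1));
}
```

Plain parity (one check bit) can only say "something flipped"; the Hamming
arrangement says *which* bit, so the machine keeps running, which is exactly
what the s/n-3 memory at NCAR bought over the s/n-1 memory at LANL.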
Actually, the Lab had no influence on Seymour's choices. He chose
60 because the largest word size up to that point was 46 bits in the
1604 and Transac S-2000. Seymour always listened to his customers,
but he made his own decisions. The designers at the Lab based on
STRETCH usage, knew that 32 bits wasn't enough for adequate
accuracy, and that 64 bits was more than enough. Therefore, 60 bits
was probably sufficient. As for parity, since we had survived
previous IBM unchecked machines (702, 704, 709, 7090, . . .), our
leaders stood with Seymour; "Parity is for Farmers."

Cray and new ideas (non-Cray)

The story frequently goes:
A bright student or architect somehow manages to get time to visit
Seymour. Cray will listen to that student's ideas and nod understanding
or disagreement. He listens to a few ideas, but he makes a comment like
"Sounds good."
But that does NOT mean that Seymour will take the idea and place it into
his architectures. Too many people (fewer these days) attempt
to suggest improvements where Seymour is thinking:
If your idea is so good, why don't YOU run with it? Leave my ideas
(and his infrastructure) to me.

Seymour is an expert on cooling and refrigeration technology.
See the Cray bio "The Supermen" by Charles Murray (Wiley, 1997).

Seymour Comparisons

Typical comparisons
Edwin Land, Polaroid, 2nd largest number of patents
Kelly Johnson, Lockheed Skunk Works (KISS principle), F-104, U-2, SR-71

Immersion cooling

A survey article by Saul Rosen in the first issue of
_ACM Computing Surveys_
mentioned that the original core memory on the IBM-7090 (1958?)
was oil-cooled, although it was quickly replaced
by an air cooled version. As the 7090 used much of the memory
technology from the 7030 Stretch, does this mean that the Stretch
memory was oil-cooled?

The IBM7030, 7090, 1401(early models) all had memories cooled by immersion in
oil. The same is true for the Harvest - ask Norm Hardy about this.

The SAGE machine produced by IBM was the AN/FSQ-32. It was water cooled.
I remember seeing this machine at SDC in Santa Monica.

Note: from the Cray-2 (immersion) to the Y-MP (chilled) on,
CRI machines have been cooled by Fluorinert (tm) [a 3M product].
Prior to that they were chilled using Freon.
The C-3 (CCC machine) was immersion cooled.
The C-4 (CCC machine) was immersion cooled.

Use of chilled water: many machines (e.g., IBM, CRI, the Japanese machines)
use chilled water for heat transfer. Not immersed.

Added note (ex-DEC)
Chevron International Oil Co. (product taken over by Exxon)
COOLANOL(tm) a silicate ester for dielectric heat transfer for
electronic equipment

"The future is seldom the same as the past" - Seymour Cray, 6/4/95.

Many people claim to have a Cray-1 (on a chip, on a board, for 1/100 the price,
etc.). What does this mean?

Almost nothing.

The Cray-1 is an early, circa-1970s machine:
A distinguished machine for its time, but you might also consider
comparing the ENIAC.
(in fact Alan Perlis in his Epigrams did:
"Just think with VLSI we can have 100 Eniacs on a chip.")

The Cray-1's lessons include vector registers and instructions,
the importance of fast, simple scalar processing (frequently forgotten).

Consider an element-wise breakdown of the basic features:
1) Processor speed, memory size, I/O bandwidth:
12.5 ns clock cycle
Most instructions executed in one cycle (contributing to RISC ideas).

2) 1 (or less) Megaword of memory, that's about 8 MB (about 1/8 the
memory in my Apple PowerBook[tm]). Yep, Cray-1 performance.
This capacity over-simplifies the architecture's multibanked memory.

3) fast I/O: 13.3 MB/S I/O point-to-point wiring,
disk striping (IOS required), etc.
features not usually found on micros.
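The multibanked memory mentioned above can be sketched with a toy model:
consecutive words interleave across banks, so unit-stride vector fetches keep
every bank busy, while a stride that is a multiple of the bank count keeps
re-hitting one bank and serializes. (NBANKS here is illustrative, not a
specific Cray-1 configuration.)

```c
#define NBANKS 16   /* illustrative bank count for an interleaved memory */

/* Word address -> bank, low-order interleaving. */
int bank_of(unsigned addr) { return (int)(addr % NBANKS); }

/* Count how many distinct banks n accesses at a given stride touch.
   More banks touched = more overlap of bank busy times = more bandwidth. */
int banks_touched(unsigned stride, int n)
{
    int used[NBANKS] = {0}, count = 0;
    for (int i = 0; i < n; i++) {
        int b = bank_of((unsigned)i * stride);
        if (!used[b]) { used[b] = 1; count++; }
    }
    return count;
}
```

Unit stride touches all NBANKS banks; stride NBANKS touches exactly one,
which is the classic bank-conflict pathology vector programmers learned to
avoid by padding array dimensions.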

The original Cray-1 cost $8.6M; sold for $8.8M to NCAR.

This is all a gross generalization/simplification.
Don't compare modern machines to obsolete machines;
if you do that, then compare against the ENIAC like Alan Perlis did
(next section).

How do I get a (used) Cray-1?

Ask Tony Cole.
He sells boards (about $150).

Sorry, no chassis or frame, but you can also buy framed boards at the
National Atomic Museum at Kirtland AFB, ABQ, NM.
$58 at last look. (down from $300, 740 available at this time)
While they last.
It is possible that these boards are export controlled.

What was the Serial Number of Loadstone?

You tell me.
Probably low. What were the serial numbers of Carillon? Or Tractor or
Harvest or your Stretch?

Where can I see a Cray-1?

S/N 1 Boston Sci. Museum
S/N ? The Boston Science Museum (Boston) actually a Cray-1/M
Formerly The Computer Museum (East)
S/N 3 NCAR originally LANL (Fl. orange and black skins)
S/N 4? Norris Bradbury Museum (Los Alamos, NM) # may no longer be visible
S/N 6 Computer History Museum (MV) originally LLNL, actually a 1A
S/N ? ex-LANL
Also S/N 1 CDC 7600

S/N 13 James Curry, Wisconsin [red and tan skins], digibarn (Max Planck), a 1A
Also T90, T3, and other models
S/N 14 NASM: Natl. Air and Space Museum (Smithsonian Inst.) (From NCAR)
S/N 38 http://www.digibarn.com 1A originally LLNL? T. Cole
S/N ? The middle of Dan Lynch's vineyard?
S/N 115/102 National Cryptologic Museum (Ft. Meade) actually Cray-1/M/X-MP
S/N ? on display at MN Supercomputer Ctr -- no idea about this
machine other than that the U of MN owned it and MN
Supercomputer Ctr got its start as a "private" company
through its operations. Write to Liz Stadther
(***@msc.edu) for more details
S/N ? Chippewa Falls Museum of Industry and Technology

For European viewers,
there's a Cray-1 (S/N 26) in the Deutsches Museum in Munich (Tan leather skin).
Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland:
Cray 1/S
Cray 2
Cray X-MP
Cray T3D/256
All these systems were in production at EPFL except the X-MP, which
was originally at CERN, Geneva. They also had a Y-MP (4M?), one of
the first EL (with lots of problems!), and a J90 -- but they didn't
make it to the museum :-(

(Swiss Federal Institute of Technology in Zurich ETH)
(S/N 420 yellow/maroon, 8.5 ns) Cray X-MP/28
standing in the entrance hall of the RZ building

Heinz Nixdorf Museum

Cray X-MP (?) in the London Science Museum, ground floor, from UK Met Office?
You can see it in the Quicktime guide
of the ground floor of the museum on their WWW site
Next to one of the Apollo command modules.

others? (with time)

Where can I see a Cray-2?

S/N 2101 8-CPUs (The last one) formerly NERSC's, now Computer History Museum
Mtn. View, CA
2015 Falcon AFB (NTB) 2 CPU, 128 MW SRAM # 2 - (lobby, crated up)
20xx Falcon AFB (NTB) 4 CPU, 512 MW DRAM # 2 - (lobby, crated up)
S/N 2008 or 2026, SGI UK HQ at Theale UK (June 98) # Blue chassis entrance hallway
S/N 2019 Ecole Polytechnique Federale de Lausanne (EPFL) Switzerland
S/N Q[12]? http://www.digibarn.com originally LLNL, Tony Cole
S/N ? James Curry, Wisconsin [blue skins], digibarn
S/N ? James Curry, Wisconsin [red skins], digibarn
S/N ? Chippewa Falls Museum of Industry and Technology

Where can I see a Cray-3?

Classified. (Best not broadcasted)
Computer History Museum (pieces)
Microsoft (pieces)

Where can I see a Cray-4?

Computer History Museum (pieces)

Other models

T-90 S/N ? James Curry, Wisconsin [blue skins], digibarn
T-3D? S/N ? James Curry, Wisconsin [blue skins], digibarn

What is the time line for
ERA (Engineering Research Associates)
Univac -----> Unisys
CDC (Control Data Corporation) -----> CDC => CDS => Syntegra
| -----> ETA (Engineering Technology Associates)
Cray Research Inc. (CRI)
| Acquires (alphabetic not chronological):
| Celerity
| Floating Point Systems
| Supertek
Supercomputer Systems Inc. (SSI [1 of 3]) [S. Chen]
Chen Systems (Pentium based servers) -----> Sequent acqs.
Cray Computer Corp.
assets liquidated
SRC Computer, Inc.
SRC Computer
CRI acquired by
Silicon Graphics, Inc. renamed just to SGI
<- Tera Computer (CRI 1/4)
Cray, Inc.

March 24, 1995

To Our Employees:

I am sorry to tell you our company has run out of money. We have nearly
run out of money so many times before that it is shocking now that it
has really happened. We have been trying to raise 20 million dollars to
carry the company through the rest of the year and bring the Cray-4 to the
marketplace. I believe we chose the best opportunity to raise that money,
but it has not been successful. We must therefore close our operation and
deal with the debts which we owe.

I am very disappointed, as I am sure you are. We have spent six years of
our lives developing a technology which seemed like an important contribution
to science. To not complete such a long effort is very disheartening.
I have asked myself if there were mistakes made, which if done differently,
would have allowed us to complete the project. I do not believe so. I
think the goals were right, and I think we did the very best we could to
accomplish those goals.

Our problem is basically one of timing. The business world, and our
government, are in a cost cutting mode. They do not wish to take any risks
at the moment. Long term investment for the future is not popular.
Many people think there are already too many computers. In a different
decade we would probably have succeeded. So in the sense that we did our
best I cannot feel bad.

I have enjoyed working with each of you and will miss this relationship
very much. Somehow we each have to go home and think about how we get
on with the rest of our lives. I am sure this will not be easy for any
of us. I wish you each well, and thank you for being a part of my life
for the years we have had together.

(signed) Seymour

Cray forms firm for computer designs

Colorado Springs, Colorado -- Cray Research Inc.
and Seymour Cray, the founder of Cray Computer,
have formed a company, SRC Computers Inc., that
will work with a small team on computer designs.
While the company has not received funding and
has not revealed the type of markets it will pursue,
former Cray Computer chief operating officer Terry Willkom
has joined Seymour Cray and three other
employees in the venture.

August 5, 1996, Electronic Engineering Times

What's "better" 'long-vector' or 'short-vector?'
Are vector register computers parallel computers?

This is an older question from net.arch/comp.arch.

It depends on whether you believe pipelining is a form of
"temporal parallelism."
'Long-vector' machines: TI ASC, CDC 203/205, ETA-10[EGQP]
'Short-vector' machines: Cray, Convex, Alliant, Weitek based chip set
IBM 3090, DEC VAX 9000,
reconfigurable length machines:
Fujitsu, Hitachi, NEC

A highly application dependent question.

I theorized (guessed) in the late 1980s that short vector machines
succeeded because as algorithms transitioned from 2-D codes to 3-D codes,
the additional dimension was absorbed by the memory organization rather
than by the lengthening of any single dimension.

This is clearly a gross generalization, because many communities did
retain the use of long-vectors.

See also other key words: "strip mining."
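Strip mining is how a compiler fits an arbitrarily long vector loop into
fixed-length vector registers (64 elements on a Cray-1). A hand-written C
sketch of what the compiler generates; VL and the scalar inner loop stand in
for the hardware vector length and a single vector instruction:

```c
#include <stddef.h>

#define VL 64   /* vector register length; 64 on a Cray-1 */

/* Strip-mined vector add: one long loop becomes strips of at most VL
   elements, each strip filling the vector registers once. */
void vadd_stripmined(size_t n, const double *x, const double *y, double *z)
{
    for (size_t i = 0; i < n; i += VL) {
        size_t strip = (n - i < VL) ? n - i : VL;  /* last strip may be short */
        for (size_t j = 0; j < strip; j++)         /* "one vector instruction" */
            z[i + j] = x[i + j] + y[i + j];
    }
}
```

This is one reason vector length matters less than vendors implied: past a
few strips, startup cost amortizes and short- and long-vector machines both
spend their time in the steady state.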

How to read model and serial numbers on ...
How to phrase the question.....

Once upon a time, life was simple....
A long time ago in a galaxy, far, far away ....
In the beginning Seymour built the 1604.
We just jump in mid-stream.

Small islands of logic exist in a sea of chaos.
Distinction 1: There are two types of numbers
a) Model numbers which MIGHT tell you something about the
configuration of the machine.
These numbers are largely nominal, but sometimes they
have order (ordinal)
Interpreted carefully, it is possible to make inferences
about a model/machine.
Distinction 2:
b) Serial numbers, or production numbers.
Supercomputers appear impressive because they have
fairly small production numbers due to their expense.
S/N #1 Cray-1 sounds impressive.
Interpreted carefully, it is possible to make inferences
about a machine.

The ERA 1604 was the sum of the Univac 1101 and the street address of
the building housing ERA..... [Murray]
The blame is not ERA's, nor Cray's, nor Univac's, nor IBM's
[1401, 7090, 709, 360/370/303x/40x1/3090//x/400...]

CDC 6600: how many produced? Where delivered?
See panel 26, Dead Computers

CDC 7600: how many produced? Where delivered?

The original 7700 was delivered to the US Army in July 1974.
It was for the Site Defense Program "SD" for short.
This program IIRC was the follow-up to the Site Defense of Minuteman,
as this program went out of vogue, or some such.
We started the 7700 development for the Site Defense of Minuteman program,
but it was completed at CDC for the Site Defense Program -
for the same Army agency, and prime contractor. What the 7700 was used for
later in life, I do not know.

7600's were built for a number of years. The development programs I was
associated with at CDC were, IIRC the following:
1. The 7400 - designed in 1969, it was never built
2. Transfer of production of 7600's from Chippewa to ARHOPS in 1970, 71.
3. Design of a BDP functional unit for the 7600 - it was never built
4. 7700 Development - 1972-1974- shipped to US Army 7/74
5. SSM 7600 (replacing small core memory with bi-polar RAMS) released to
manufacturing 11/74.
6. LCME 7600 (replacing Large Core Memory with a larger core memory)
Shipped to NSA 3/76
7. Development of the Cyber 176, which was a 7600 with Cyber 170 PPS
including SSM and LCME - first shipment 10/77.

The 7600 or derivatives were replaced with the CYBER 990, which we
shipped to Combustion Engineering in August 1985. IIRC, 7600 or CYBER 176
shipments had all but dried up by then.

Not sure what this adds to the discussion, but may be interesting for
any historians.

Control Data Engineer/Engineering Manager from 1965 to 1992

Cray-1 model numbers for the most part only had improved variants designated
using letters (<nothing>, A, B, S, M, and so forth). Later a 4-decimal digit
scheme saying something about I/O configuration, etc. was introduced.

Cray-1 serial numbers were simple: serial. Sequential. Tallying.
1, 2, 3, 4, ...
More or less as they came off the floor.


1. Serial number X/MPs
Cray-X/MPs were a different type of machine.
The fact that they were multiprocessors, and that processors were expensive
necessitated the specification of the number of processors as a means of
characterizing their configuration.

The first X-MPs were 2 processor machines, and they initially had 2 MW of
memory. Later these scaled to a maximum of 8 MW addressable
memory and 4 CPUs: so what's the designation? (Exercise.)
126 machines produced?

The 4-digit scheme characterizing I/O channels, etc. was occasionally used, but
fell into disuse. People concentrated on the "sexiness" of fast CPUs
by this time. This is one reason why on particular old documents you will see
6 and more digit numbers.

X-MP serial numbers were 300s, 3xx.
This became 4nn and 5nn later in the production cycle.

Cray-2s complicated the problem. They were not instruction set compatible,
something which is almost inconceivable in this day. Initially,
the Cray-2 was going to be designed in one model and one model only:
4 processors and 256 MW of memory.
Later, a small number of variants occurred: 2 and 8 processors,
128 MW and 512 MW, Dynamic RAM (DRAM) and Static RAM (SRAM).

Cray-2 serial numbers as they left the factory floor were 2000 serials.

2. Binary compatibility

Cray-Y/MPs, which came next, were similar to X/MPs.
Y/MPs form the basis of several execution-compatible families:
C-90s, T-90s, ELs, etc.
Binary compatibility extended back one generation only.
You could not run X/MP binaries on a C90. The J90 counted like a YMP.
There were XMP cross compilation libraries for use on the YMP so that
you could compile X/MP programs on a YMP.

So the execution compatibility across the range looks like

C1 --> XMP --> YMP --> C90 --> T90
        |       |
       XMS --> ELs --> J90 & J90se
C2             T3d     T3e

For binary compatibility you can go across one arrow and/or down one arrow
only. (Note ** ) An mpp emulator was available for ELs to develop mpp codes
while waiting for the T3 product line to arrive.
The emulator managed about 4 mpp nodes but used the native CRI arithmetic.
Later T90 cpus used IEEE arithmetic which would not support C90 native codes.

The first Y-MP models were 832 machines:
8 CPUs and 32 MW of memory. We quickly start to lose positional logic
and you have to keep a sense of where the CPU/memory dividing line is.
8128s were the next ceiling, the first being [S/N 1030].

Y-MP serial numbers started in 1000s.

J90 machines are 9xxx.

ELs are 5xxx.

T3D/T3Es are 6xxx
EPCC T3D is sn6001
T3E-1200 is sn6906
T3E is sn6710

ETA Systems and others followed similar numbering schemes.

Cray T-3[D/E] configurations are ...

List of ETA-10 sites

We do not have serial numbers for installations.
cycle time(E) > cycle time(G)
S = SV
LN2 sites (alphabetic) Config
DWD ETA10-E4128E Deu. Weather
DWD ETA10-G4128E

FSU ETA10-E4128S
FSU ETA10-G4128S

JvNC ETA10-E4128E
JvNC ETA10-E8256E
JvNC ETA10-G8256E

MnSC ETA10-E4128S

TIT ETA10-E8256S

UK Met. ETA10-E4064E

U. Aachen ETA10-G6128E

Count 11, Murray: 7, possible resales.

Air cooled sites Config.
Acad. Sinica, TW ETA10-P108S

Aerosp. Det., Fr. ETA10-Q116S

Aust. Met. ETA10-P108E

Canadian AES ETA10-P108S



CIRA, It. ETA10-P116S

Classified ETA10-Q232S



Ford ETA10-Q108S


Houtex/PGI ETA10-P108S

Meiji U. ETA10-P108S

MUMM, Belg. ETA10-P108S




Pulsonic, Alb., Ca. ETA10-P108S

Purdue ETA10-P108S

Technion U. ETA10-P108S

Total CFP ETA10-P108S


U. Cologne ETA10-P108S

U. GA ETA10-P108E
U. GA ETA10-Q216E [upgrade]

U. W. Ont. ETA10-P108P

Veritas, Alb., Ca. ETA10-P108S

Count 28, Murray: 27. Double counting the upgrade.

The spooks asked for and got an instruction to do "population counts"
(the number of set bits per word).

This is a common note/story. You tell us. 8^)

"At one point, I was told this instruction was added to the 6600 at the
request of Los Alamos."

3. bmm
Also Bit matrix multiply functional units were available as an option on
C90 and T90 cpus.


%A J. E. Thornton
%T Design of a Computer: The CDC 6600
%I Scott, Foresman & Co.
%C Glenview, IL
%D 1970
%K recommended, RISC inspiration,
%K btartar, book, text,
%X The 6600 has influenced a lot of the supercomputers from
Cray and CDC. Also no commercial manufacturer of such
an outstanding machine has ever revealed so much detail.
Amos Omondi
%X Population count: pages 101 and 105, Figure 67 (cascading adds).
#The last annotation came from Gordon Bell's copy.
%X As a book, it's really not very good: it jumps too much
between very detailed things and scants the higher-level
design. It's an engineer's memory dump, and I was
irritated by it, as an account, from the moment I bought it
(about the time of publication).
%X Nevertheless, it is still a fascinating document, ill-organized
as it is. The problem in obtaining permission in advance
is finding someone who is willing to spend a moment thinking
about the question. (Something to be said for a fait accompli).
%X After being granted permission to copy, scan, and make available this
book by Mr. Thornton, who is the current holder of the rights,
a scan is now available at the following location:
A copy of the correspondence pertaining to permissions can be found in
the last two pages of the scan.
Please be considerate to the author and do not abuse this gift.

Gordon Bell memorializes Seymour:

Cray's five Ps:
packaging, plumbing (bits and heat flow), parallelism, programming, and
understanding the problems or applications.

Why .h files?

The name space.
It is important to establish a foothold in the name space.
The collision problem is becoming worse.

A company with a name like Cray (say) publishes *.h files in the public
domain using #defines on certain 8-character sequences. A few non-Cray
software designers writing tools use #ifdef CRAY to inflate their egos
(it happens), learn from bug reports filed by people using real Crays,
and meanwhile the name space develops some interesting collisions.
Then more software tools appear on your architectures, some of which aren't
bad tools. Keep your tools proprietary, and tool development takes longer.
It works; you are reading this on the Usenet, right?
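A concrete illustration of the collision problem; every name below is made up
for the example, but the #ifdef CRAY pattern itself was common in portable C
of the period:

```c
/* Hypothetical portable header fragment.  The code keys its behavior
   off the vendor's predefined symbol CRAY; anyone else "squatting" on
   that same 8-character identifier silently flips every such #ifdef
   in every program that includes this. */

#ifdef CRAY                 /* predefined by CRI compilers */
#define WORD_BITS 64        /* Cray integer types were word-sized */
#else
#define WORD_BITS 32        /* a typical 1980s mini or micro */
#endif

/* Illustrative helper so the choice is observable at run time. */
int host_word_bits(void) { return WORD_BITS; }
```

Once a non-Cray tool defines CRAY for its own purposes, this header and
everything like it takes the wrong branch, and the name-space collision the
text describes is under way.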

This is why letting the world know your source code is important.

"If it isn't source, it isn't software."
--Dave Tweten "Datamation"

What's "class 6?" (class VI)

The US Dept. of Energy developed a scale for ranking their supercomputers.
If someone tells you, "XXX is a class 7 machine" or a class 8 or
higher number machine:
they don't know what they are talking about.
If you think I am trolling, you are right. I want someone to PROVE to
me that a DOE Class VII designation really exists (i.e., not merely
mentioned by someone in a paper).


%A Sidney Fernbach, ed.
%T Supercomputers, Class VI Systems, Hardware and Software
%I North-Holland
%C Amsterdam
%D 1986
%O ISBN 0-444-87981-1
%K book, text, cray, cdc cyber, data flow, NEC SX-2, Fujitsu VP-200,
Hitachi 810/20, vector processing,
%X A collection of papers surveying existing computer architectures
rather than newer proposed supercomputer architectures.
%X A book from one of the men who set up the "Class system" of the DOE.

What's my VAX/IBM PC on these scales?

"Class 1/2." -- Sid Fernbach.

No, in simple terms, they don't rate to be placed on the scale.
It is normalized for the specific time period asked.

"These aren't real computers; they are marijuana." -- George Michael

But this is silly, the PC is more powerful than the Eniac.

So build a time machine and send a VAX/PC back in time,
and you will have the most powerful computer in 1946.

What is the influence of the CDC 6600 and Seymour Cray on RISC Architectures?

The cheapest, fastest, and most reliable components of a computer
are those that aren't there.
--Gordon Bell

"Really Invented by Seymour Cray"

Cray is widely credited as influencing (inspiring)
Hennessy, Patterson, Cocke, their initial designs.
The issue is not simply one of raw performance.
The issues involve design development time and reaching market.

Current best first reference:

%A David A. Patterson
%T Reduced Instruction Set Computers
%J Communications of the ACM
%V 28
%N 1
%D January 1985
%P 8-21
%K trade/popular/business press, industry references, RISC,
%X While not a parallel computer, important for processor design.
%X From the text:
{old arguments for CISCs}
1. Richer instruction sets would simplify compilers.
2. Richer instruction sets would alleviate the software crisis.
3. Richer instruction sets would improve architectural quality.
Memory efficiency was such a dominating concern....
... 70s design principles:
1. The memory technology used for microprograms was growing rapidly,
so large microprograms would add little or nothing to the cost
of the machine.
2. Since microinstructions were much faster than normal machine instructions,
moving software functions to microcode made for faster computers and
more reliable functions.
3. Since execution speed was proportional to program size,
architectural techniques that led to smaller programs also led to
faster computers.
4. Registers were old fashioned, and made it hard to build compilers;
stacks or memory-to-memory architectures were
superior execution models. As one architecture researcher put it:
"One's eyebrows should rise whenever a future architecture is
developed with a register-oriented instruction set." Footnote:
Myers, G. J. The case against stack-oriented instruction sets.
Comput. Archit. News 6, 3 (Aug. 1977), 7-10.
* Semiconductor memory was replacing core, ...
* Since it was virtually impossible to remove all mistakes for 400,000 bits
of microcode, control store ROMs were becoming control store RAMs
* Caches had been invented -- ...
* Compilers were subsetting architectures -- ...
1. Virtual memory complications.
2. Limited address space.
3. Swapping in a multiprocess environment.
1. Functions should be kept simple unless there was a very good reason to
do otherwise.
2. Microinstructions should not be faster than simple instructions.
3. Microcode is not magic.
4. Simple decoding and pipelined execution are more important than
program size.
5. Compiler technology should be used to simplify instructions rather
than to generate complex instructions.
1. Operations are register-to-register, with only LOAD and STORE accessing memory.
2. The operations and addressing modes are reduced.
3. Instruction formats are simple and do not cross word boundaries.
4. RISC branches avoid pipeline penalties.
Compiler Technology versus Register Windows
Photos: IBM 801, UC Berkeley RISC II, and Stanford MIPS.
Delayed Loads and Multiple Memory and Register Ports
Multiple Instructions per Word
All RISC machines borrowed good ideas from old machines,
and we hereby pay our respects to a long line of architectural ancestors.
In 1946, before the first digital computer was operational, von Neumann wrote
The really decisive considerations from the present point of view, in
selecting a code, are more of a practical nature: simplicity of the
equipment demanded by the code, and
the clarity of its application to the actually important problems
together with the speed of its handling of those problems.
For the last 25 years Seymour Cray has been quietly designing
register-based computers that rely on LOADs and STOREs while using
pipelined execution. James Thornton, one of his colleagues on the CDC-6600,
wrote in 1963
The simplicity of all instructions allows quick and simple evaluation
of status to begin execution.... Adding complication to a
special operation, therefore, degrades all the others.
and also
In my mind, the greatest potential for improvement is with
the internal methods. . .at the risk of loss of fringe operations.
The work to be done is really engineering work, pure and simple.
As a matter of fact, that's what the results should be --
pure and simple. footnote:
Thornton, J.E. Considerations in Computer Design -- Leading Up to
the Control Data 6600. Control Data Chippewa Laboratory, 1963.
%X It's not that Cocke wasn't doing this; it's just that if you look
at the instruction sets historically, Cray was doing a lot of the
things that were later found in RISCs.
%X I heard later that CDC inspired one of the early IBM projects that
eventually led to RISCs and then superscalar, so there might be some
other link there. Dave

My guiding principle was simplicity.
I think there is an expression for that.
Don't put anything in that isn't necessary.
Whereas many other places at that point in time and
for several years after that were adding
all the bells and whistles that could be imagined.
Later on much more recently there came the term "RISC" which says
"back to the basics", make it as simple as you can. I thought I was
a RISC person all the time even though I didn't know the name.
--Seymour Cray

The early RISC pioneers were not without their critics.
One of the most prominent critics is Nick Tredennick, whose work
on the microcoded 68000 and IBM S/370 NMOS experimental chip is
in stark contrast to RISC philosophy. Tredennick has argued
that the benefits of RISC design are illusory and that whatever
advantages RISCs enjoy are attributable to:
1) newer designs carry less compatibility baggage, and
2) RISC processors tend to enjoy higher bandwidth to memory than
comparable CISC designs.

%A J. E. Thornton
%T Design of a Computer: The CDC 6600
%I Scott, Foresman & Co.
%C Glenview, IL
%D 1970
%K recommended,
%K btartar
%X The 6600 has influenced a lot of the supercomputers from
Cray and CDC. Also no commercial manufacturer of such
an outstanding machine has ever revealed so much detail.
Amos Omondi
%X Population count: pages 101 and 105, Figure 67 (cascading adds).
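The "cascading adds" population-count scheme mentioned above can be sketched
in a few lines of Python. This is a hypothetical illustration of the general
technique (sum bit pairs, then nibbles, then gather the byte sums), not the
actual 6600 circuit:

```python
def popcount64(x):
    """Count set bits in a 64-bit word by cascading adds."""
    x = x - ((x >> 1) & 0x5555555555555555)                          # 2-bit sums
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)   # 4-bit sums
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F                          # 8-bit sums
    # multiply gathers all byte sums into the top byte
    return ((x * 0x0101010101010101) & 0xFFFFFFFFFFFFFFFF) >> 56
```

Each step halves the number of partial sums, so the whole count takes a
handful of logical operations instead of a 64-iteration loop.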

# this is a redundant reference on this panel.

Look up references on the IBM 801 (Cocke), SOAR (Patterson), MIPS (Hennessy).

"We need more bodies," said West ...
...around this time videotape was circulating in the basement, and it
suggested another approach. In the movie, an engineer named Seymour Cray
described how his little company, located in Chippewa Falls, Wisconsin,
had come to build what are generally acknowledged to be the
fastest computers in the world, the quintessential number-crunchers.
Cray was a legend in computers, and in the movie Cray said that he liked
to hire inexperienced engineers right out of school, because they do not
usually know what's supposed to be impossible. West liked that idea.
"Shall we hire kids,...?" said West.
Tracy Kidder, The Soul of a New Machine, 1981

Some modern computers, most notably the machines of Seymour Cray,
remain hardwired; they respond directly to the electrical equivalent
of assembly language.
Tracy Kidder, The Soul of a New Machine, 1981

How many instructions were in the CDC instruction set?

Ask Horst Simon 8^).

What constitutes 'balance?'

An interesting vague question.

A useful analogy comes from the icon of the Salishan Conference
(remember that from another panel?):

^ A
s / \ r
m / \ c
h / \ h
t / \ i
i / \ t
r / \ e
o / \ c
g / \ t
l / \ u
A / \ r
<_____________________> e

since updated replacing Language.

I might take issue with aspects of this model, but it's more useful to
consider other ideas based on it:

e / . \ S
r / . \ o
a / . \ f
w / . \ t
d / . \ w
r / . \ a
a / . . \ r
H / . . \ e
/ . . \

/ . \
e / . O\ S
/ . p\
r /n .C e\ o
/o C.o r\
a /i .m a\ f
/t P.p t\
w /a .i i\ t
/c U.l n\
d /i .e g\ w
/n .r \
r /u . S\ a
/m y.L y\
a /m r. N .i s\ r
/o o. a o .b t\
H /C m. t t .r e\ e
/ e. a a .a m\
/ M. D t .r \
/ . D a t a i . \
A p p l i c a t i o n

/ . \
e / . \ S
/ . \
r /n . \ o
/o C. \
a /i .O L\ f
/t P. i\
w /a .S b\ t
/c U. r\
d /i . a\ w
/n . r\
r /u . i\ a
/m y.C e\
a /m r. A .o s\ r
/o o. a l .m \
H /C m. t g .p \ e
/ e. a o .i \
/ M. D r .l \
/ . N o t a t i o n i .r\
A p p l i c a t i o n

I prefer the reduction.
George has gone three-dimensional. I have a 3-D model (since I used
to work with sheet metal).

Where are the supercomputers?

Two separate lists are compiled. Their existence is periodically posted.
The reader must realize the competitive and secretive nature of some
of this market, politics, and commerce. All such lists must be regarded with
some suspicion. Manufacturers have their ax (keep customers), users (like
big industrial or government concerns) have their ax, etc. Any list is
suspect. Lists like these and announcements could also be used for
disinformation. See "Why is this group so quiet?"
So don't expect reliable stats without first signing a non-disclosure agreement.

(Gunter Ahrendt)

Then there is the "German" list aka TOP500 list.

I would be willing to make my WWW list of supercomputing
and parallel sites into an official comp.parallel/
comp.sys.super page, still maintained by me.

Currently, on a slow week, this page gets about 600-1000 accesses.
IEEE URL: http://computer.org/parascope/

WWW List of Parallel & Supercomputing Sites and Vendors


This WWW page is updated regularly and features
links to World-Wide Parallel & Supercomputing
*-* Research Sites (Academic, Government, and International)
*-* Vendors
*-* Related supercomputing information

There is a quick-index to the Sites, as well as a
reverse-chronological listing of new updates.

The Cray-2 was a failure, wasn't it?

That depends with whom you are speaking. It was enough to wipe out ETA Systems.
Important advances:
1) Pushed "big memories." The first Cray-2 had more physical memory
on it at the time than all previous Cray architectures combined.
2) First supercomputer to run Unix(tm). Won the Unix wars.
Wiped out non-competing OSes and spawned a slew of

Technology pushes:
Dropping the B- and T-registers for a 16 KW "local" memory,
not quite a cache. Seymour didn't know how to use caches properly?
(the B- & T-regs were later added back in the Cray-4, and local memory removed)
New Fortran (cft77) compiler written in Pascal.

Single CPU to memory data path.
Two cycles per instruction.
Intended to use GaAs (pushed to Cray-3). Slower memory.

List of known Cray-2 sites (all decommissioned)
about 3 dozen installations (number indistinct due to upgrades and resales)

External CRI:

SN First Config Notes
2 Q[12]? http://www.digibarn.com originally LLNL, Tony Cole
1 CPU, 64 MW DRAM # red skin, for sale, Tony Cole
2 Q3 later MnSC
2001 LLNL 4 CPU, 64 MW DRAM # red skin, NSERC
later MnSC
2002 Ames 4 CPU, 256 MW DRAM # technically the first real one
# blue skin
2003 CRI->MnSC->MIT 4 CPU, 256 MW DRAM # maroon and gold
2004 classified 4 CPU, 256-MW DRAM # red skin
2005 classified 4 CPU, 256-MW DRAM #
2006 U. of Stuttgart 4 CPU, 256 MW # yellow and black
2007 CCVR 4 CPU, 256-MW DRAM #
2008 Harwell 4 CPU, 256-MW DRAM #
2009 ARL (BRL) 4 CPU, 256-MW DRAM # Mike Muuss' machine
2010 NTT 4 CPU, 256 MW SRAM #
2011 AFSCC 4 CPU, 128 MW SRAM # AFWL
2012 CRI->DKRZ 4 CPU, 256 MW D/SRAM # rainbow skins (5 colors max)
# last of the columns, in comes the waterfall
2013 Ames 4 CPU, 256 MW SRAM # blue skin
2014 TACOM 4 CPU, 256 MW DRAM #
2015 Falcon AFB (NTB) 2 CPU, 128 MW SRAM # 2 - (lobby, crated up)
20xx Falcon AFB (NTB) 4 CPU, 512 MW DRAM # 2 - (lobby, crated up)
2016 classified 4 CPU, 256-MW DRAM #
2017 KIST/SERI 4 CPU, 128-MW DRAM #
2018 NERSC # 4 cpu, back to CCC/CRI
2019 EPFL (Lausanne, .ch)4 CPU, 128-MW DRAM #
2020 NCSA (UIUC) 1988-95 4 CPU, 256 MW # blue skin
2021 MnSC -99 4 CPU, 512 MW (1/3) # blue/black? skin
2022 Aramco 2 CPU, 256 MW DRAM #
2023 LaRC 4 CPU, 128 MW SRAM #
2024 CRI #
2025 CCC #
2026 RI->RAE->SGI 2CPU/128MWS -> 4CPU/256MWD #
2027 ?
2028 CNRM (Centre National de Recherches Meteorologiques) Toulouse # ?
4 CPU, 256 MW SRAM?
2029 Eli Lilly 2 CPU, 128 MW # ?
20xx CEA-CGCV 4 CPU, 256 MW DRAM # (.fr)
20xx DIRMET 4 CPU, 256 MW DRAM # (.fr)
2101 CCC 8-CPUs (The last one) -> NERSC? # Red and black
# only 8 cpu (at TCMHC, Moffett Field)
2951 CRI 1 CPU, 16 MW DRAM? #
MN Supercomputer Ctr # 4 cpu and 512 MW

What was the physically smallest Cray machine?

The EL92 was a repackaged version of the air-cooled EL range that measured
approx. 1.2 m high by 0.6 m wide by 0.6 m deep and could run from a normal
power outlet. Whilst not a big commercial success (it was a bit late to market),
it was widely used at trade shows and as loan equipment.
Available in 2- and 4-CPU versions with 512 MB of memory, it was truly
a deskside Cray.
Good write-up and pictorial in "Advanced Systems" magazine, July 1994.

# actually, this is not quite true. --enm

What was the physically largest Cray machine?

The 16-CPU version of the C90 was a truly big machine, standing 2.5 m tall
with the CPUs just fitting in a 4 m diameter circle; together with the power
and cooling equipment, the whole system weighed in at approx. 12 (?) tonnes.
There was one (or more?) system delivered that consisted of
four interconnected 16 cpu C90s.

Where is the Cray-1 list? What models?
About 66 frames (A, B, S, M).

Where is the Cray-X-MP list?

About 150? frames.

Where is the Cray-Y-MP list?

The original ... say something here.....

SN414 (red and black)

How many were built?
By PC and mainframe standards: not many.
If not the X, then certainly the Y.

X-MPs were serial numbered in the 300s.

Original 1s, 1As, 1Ses, 1Ms (later renumbered as X-MP/1s).
We had a post at one time which detailed differences.
I am a post-1S baby; someone else has to detail the minutiae.
For instance, I think Ames' first Cray was a Cray-1S/1300S.
The 3 was the number of I/O controllers, I think.
So if you know, you can elaborate here:
1S: bipolar static RAMs?
1M: These had CMOS memories, hence the M.
All limited to 8 MW addressing. Higher addresses only came to Y-MPs.

X-MPs started the number-of-CPUs/amount-of-memory numbering scheme.
The first X-MP was a 22: 2 CPUs and 2 MW of memory (COS).
This was the first CRI multiprocessor.

The 1S-1300 precursor was S/N 38.
We had two X-MPs, the second being 313, so I have to deduce what the other was.
The 22 was upgraded to a 48. Reader exercise. So did the S/N change?
Got me. This was all in the era before machine DNS naming.
Most people used configuration naming (22 vs. 12 vs. 48)
or like LLNL and LANL, A, B, C. Not many sites had multiple Crays.

It is much harder to maintain a list of Cray-1 derived machines, because
1) there were many more of them (models and ),
2) they were frequently resold,
3) older records are harder to find,

But if some one wants to do it, here's the space:

S/N Model Owner path
14 1A SDC AFWL MFECC Smithsonian

Where is a current list of sites?

That's SGI/CRI's concern. Ask them.
You can also look at Gunter's list and the "TOP500" list

Date: Wed, 15 Mar 1995 09:26:56 -0700
From: Jim Davies <***@Craycos.COM>
Message-Id: <***@tnt.Craycos.COM>
Subject: Re: cray
|We have a Cray-2 prototype here called snq2 (q for quadrant),
snq2 isn't the thing that Newt Perdue had a photo of in the aquarium?
I don't really know; I've only worked here, not at CRI. I think
there was a snq1 also, which would have been the first prototype.
S7? Not a serial number is it?
Sort of; it's the "seventh" Cray-3 tank we built. Numbers S1 through S4
are one-octant tanks, S5 and S6 are two-octant tanks (S5 is the one
at NCAR which they call "greywolf"), and S7 is the only four-octant tank.
The module sets have tended to roam freely between tanks as needed
(e.g. S6 hasn't ever spent any long periods in production because it
gets used to test spare and replacement modules for S5 and S7).
S4 has been converted to a 2-octant tank for use as the PIM system.
S1 through S3 are essentially gone at this point -- replaced by Cray-4 tanks.

Yes. They're not quite like the old T registers, in that they have
data paths to the A, S, and V regs. We're using them for argument
passing, stack pointers, scalar register spills, scratch space for
vector reductions, etc. (i.e. the same sort of things local memory
was used for, except no vector register spills). There are no direct
move instructions between A and S registers, so they're used for
those moves also (and we still have a word-addressable architecture
so we need to do character pointer manipulation in S registers,
shifting and masking to convert char pointers to word pointers, etc.).
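The shift-and-mask pointer conversion described above is easy to sketch. This
is a hypothetical Python illustration assuming 8-byte words (the real
machine's register usage is of course not modeled):

```python
BYTES_PER_WORD = 8   # assuming a 64-bit word

def char_to_word_ptr(byte_addr):
    """Split a byte address into (word address, byte offset within the word)."""
    return byte_addr >> 3, byte_addr & 0x7

def word_to_char_ptr(word_addr, byte_offset=0):
    """Rebuild a byte address from a word address and a byte offset."""
    return (word_addr << 3) | (byte_offset & 0x7)
```

On a word-addressable architecture, every character access must do this sort
of shifting and masking explicitly, which is what the S registers were
handling above.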

My feeling about local memory was that it was never large enough.
In fact, Seymour's initial Cray-4 design had a larger local memory
(128K or 256K, one or two extra modules depending on user's needs).
He was convinced by our benchmarkers that an extra memory port would
be more useful, in addition to reducing the processors to one module apiece.
It seemed logical, since the goal of caching is to ease the memory
bottleneck, and the second memory port also helps with this (while
making the machine smaller).

Arguably one of our problems with local memory was lack of adequate
compiler technology to use it; to properly use it as a programmable cache
really requires looking at entire loop nests rather than the cray-traditional
inner-loop-only vectorization scheme. We're working on loop-nest optimizations
now, since even the vector registers may be used this way in many cases.
Also, which port gets used for a particular load or store is completely
under the programmer's control, so the compilers have to make some tougher
optimization choices in this regard also.
Every one else wants CCC to do well. I suggested to a friend at IBM TJW to
again submit a PR to get a C-4.
Thank you. Our problem, as always, is making the machine work. It's
always seemed that once we can produce a reliable fast system the
marketing should be the easy part (although the Cray-3 was so late
that it wasn't true for that machine).

Take care,

-- Jim

Cray Electronics holdings /Cray Communications /Cray Systems
will be changing their names to Anite systems etc.

There has always been some confusion between these companies and the
unrelated Cray Research (a Silicon Graphics company).


Message-Id: <m0v6pol-***@rmii.com>
Date: Fri, 27 Sep 96 21:11 MDT
From: ***@rmi.net (Stephen O Gombosi)

I don't understand what all the fuss is about. The definitions are really
quite simple (yes, Gene, I want credit for these):

The Devil's Supercomputing Dictionary - A Guide To Vendorspeak
For The Unwary And The Perplexed
Stephen O. Gombosi (with apologies to Ambrose Bierce)

Supercomputer - What *I* am selling today

Dinosaur - 1) What *they* are selling
2) What you already have that I am trying to replace with my
"supercomputer", even if it is something that I personally
told you was a "supercomputer" when I sold it to you yesterday.

Good code - Code that runs well on a "supercomputer" and badly on
a "dinosaur"

Bad code - Code that runs well on a "dinosaur" and badly on
a "supercomputer".

Industry standard benchmark - A whole bunch of "good code"

Unrepresentative code fragments whose performance is irrelevant - A whole
bunch of
"bad code"

Fair and open procurement - Anything that results in the sale of a

Free and fair trade - Anything that results in the sale of *lots* of

Grand challenge problem - Any problem, however trivial, which can be solved
*only* with "good code"

Dusty deck - "Bad code" which I cannot figure out how to replace with
"good code"


Article 6395 of comp.sys.super:
From: ***@dip.eecs.umich.edu (Theodore Tabe)
Newsgroups: comp.sys.super
Subject: Seymour Cray Dies
Date: 5 Oct 1996 20:12:46 GMT
Organization: University of Michigan EECS
Message-ID: <536ffu$***@news.eecs.umich.edu>

High-Tech Legend Seymour Cray Dies

COLORADO SPRINGS, Colo. (Reuter) - Seymour Cray, known as the father of the
supercomputer, died early Saturday nearly two weeks after suffering
serious injuries in a car accident, a hospital spokeswoman said. He was 70.

``Seymour Cray died at 2:53 a.m. (4:53 a.m. EDT). The cause of death was
complications from massive head injuries,'' said Kate Brewster,
spokeswoman at Penrose Community Hospital.

Cray had been in the hospital since Sept. 22 when his Jeep Cherokee was
hit by another car on Interstate-25 in Colorado Springs.

Cray is credited with developing the first fully transistorized
supercomputer in 1958,
and after he formed his own company bearing his name in the 1970s, his
name became synonymous with cutting-edge technology.

In 1957 with Bill Norris he started Control Data Corp., and then founded Cray
Research in Minnesota in 1972. Many of Cray's supercomputers --
large scientific machines that
can process large amounts of data at great speeds -- were used by the U.S.
government, including the military.

Earlier this year, Cray Research was sold to Silicon Graphics Inc.

Cray then established Cray Computer Corp. in Colorado Springs, which was
separate from the first Cray company and which filed for bankruptcy in 1995
after it failed to attract some $20 million it needed from investors.

The development of the personal computer that delivered high power right
to the desk
of scientists and engineers and slimmer defense budgets spelled the end
for the supercomputer, whose cost can run as high as $30 million.

Reut11:16 10-05-96

(05 Oct 1996 11:16 EDT)


Article 6413 of comp.sys.super:
From: ***@flowbee.interaccess.com (Tom Johnson)
Newsgroups: comp.sys.super
Subject: Seymour Cray
Date: 8 Oct 1996 11:48:41 -0500
Organization: InterAcces, Chicagolands best Internet Provider
Message-ID: <53e0l9$***@flowbee.interaccess.com>

The following is from "The Computer Establishment" by Katherine Davis
Fishman, copyright 1981 by Harper and Row. Transcribed without their
permission, but hey, I really do recommend this book!

The prospectus of Control Data stated that the company's principal
initial business would be research and development for the military,
and that the company would also get into components and
computer accessories; it did not intend to compete head-on with
IBM and the other giants of the industry. But among the former
ERA [Engineering Research Associates] men whom [William] Norris
wooed away from Univac was a talented engineer named Seymour Cray,
who convinced Norris that a powerful, relatively inexpensive
solid-state computer built from printed circuit modules would prove highly
profitable. You could sell it to sophisticated customers -- the
Department of Defense, the aircraft companies, the universities --
who did not require a heavy investment in marketing and support; they
knew quality when they saw it, and preferred to do their own
programming. Development of Cray's 1604 computer was a costly program for a
small company to undertake, and Norris had just spent half a million
dollars on his first acquisition, a production engineering firm. Still,
he took the gamble, and the salaries of CDC employees were cut in half
while the 1604 was in progress.

[ There is a photo in the book with the caption: "Control Data's
first computer leaves the shipping dock. From left to right: William
Norris, Frank Mullaney, George Hanson and a representative of North
American Van Lines. The system arrived at the U.S. Naval Postgraduate
School in Monterey, California on January 12, 1960." The two crates
pictured beside the moving van are clearly labeled ' 1604 '. Mr. Cray
is nowhere to be seen. ]

[ Continuing on page 202 ]:
The 1604 computer's success in the scientific community -- CDC
reported its first profits after less than two years of operating, an
extraordinary record for a mainframe company -- gave Cray a
reputation for genius, a fame nourished not only by further design
successes but by the man's peculiar, reclusive personality.
As the company grew Cray began to complain that it wasn't any fun any more:
administrative and ceremonial duties were getting him down. He
had a single-minded ambition to design the largest computers in the
world, and about these machines he could talk eloquently; on any
other subject he was silent. What Cray wanted was to work in some
quiet woodsy place like his hometown, Chippewa Falls, Wisconsin,
where he owned some land. As for Norris, he was betting on Cray,
who was clearly a man worth coddling. He built a lab on Cray's
land, and Cray became known as the Hermit of Chippewa Falls.
Norris visited the lab twice a year by appointment and Cray came to
headquarters every six weeks or so. The rest of the time he worked
far into the night with his soldering iron. When one of Norris's aides
brought some professors, prestige customers, out to the lab to see
The Hermit, Cray gave an illuminating talk on his current project
and then, in honor of the occasion, took the visitors out to the local
diner -- an almost unheard-of mark of favor since Cray usually
brought his lunch in a metal pail. But after finishing his hot dog --
with dispatch -- Cray arose, said he'd better be getting back to work,
wished the professors a pleasant trip and walked out. There was,
perhaps, a certain showmanship in such a performance, but if so, it
didn't hurt the company.

{ from pages 202-3 of "The Computer Establishment" }

BTW, I wrote my very first programs in FORTRAN and assembler ('CODAP')
on a CDC 1604 at the University of Wisconsin, Madison in 1966-7.

Article 6414 of comp.sys.super:
From: ***@hikimi.cray.com (Roger Glover) # defunct address
Newsgroups: comp.sys.super
Subject: Re: CraySuper vs. Pentium Pro
Date: 8 Oct 1996 15:26:37 GMT
Organization: Cray Research, Inc.
A Cray C94A would be equivalent to how many Pentium Pro 200's?
In one sense of meaning this can be answered straightforwardly; in
another sense this is almost impossible to gauge accurately.

The first meaning occurs if we rephrase the question as follows:
How many times could a CRAY C94A complete a unit of work
before the Pentium Pro 200 completes it once?
Then the procedure is straightforward: time one, time the other,
divide. This measure gives us a sense of the relative "capacities"
of the two machines. As long as we can agree on what unit of work to
time there is no problem.
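The "time one, time the other, divide" procedure can be written down in a few
lines. A small Python sketch; the two lambdas below are stand-in workloads,
not real machine benchmarks:

```python
import time

def timed(fn):
    # wall-clock one run of the unit of work
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def capacity_ratio(fast_fn, slow_fn):
    # how many times the "fast" machine finishes the unit of work
    # while the "slow" one finishes it once
    return timed(slow_fn) / timed(fast_fn)

# stand-in workloads: the same kind of work at different sizes
ratio = capacity_ratio(lambda: sum(range(10_000)),
                       lambda: sum(range(5_000_000)))
```

The whole argument in the paragraph above is that this ratio is only
meaningful once both parties agree on what the unit of work is.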

The second meaning occurs if we rephrase the question as:
How many Pentium Pro 200s working in concert would it take
to complete a unit of work as fast a CRAY C94A completes that
unit of work?
The difficulty here is that, even if we agree on the unit of work, we
then have to agree how to design that parallel array of Pentium Pro
200s, how to estimate the granularity of the work distribution, and
so on. All these additional complications have to do with how well
the unit of work will "scale" across an array of Pentium Pro 200s.

Beyond that, it is much more difficult to agree on a fair unit of
work. For example, if the unit of work does not have sufficient
intrinsic parallelism, an infinite array of Pentium Pro 200s
communicating instantaneously would not be as fast as a CRAY C94A (or
any other significantly faster system). The guiding principle here
is called "Amdahl's Law"; it is a straightforward corollary to the
"Law of Diminishing Returns." Check for it in the FAQ the next time
that part of the FAQ rolls around.
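Amdahl's Law itself is easy to state: if a fraction f of the work is serial,
no number of processors can give a speedup beyond 1/f. A small Python sketch:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Maximum speedup on n_procs processors when a fraction of the
    work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# even with a billion processors, 5% serial work caps speedup near 20x:
# amdahl_speedup(0.05, 10**9) -> ~20.0
```

This is why the infinite array of Pentium Pros in the paragraph above still
loses to one sufficiently fast processor on insufficiently parallel work.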
Does anyone have a MIPS rating on the C94A and the Pentium Pro 200?
C94A: Yes, PP200: Not me.
Would it be proper to compare using MIPS?
ABSOLUTELY NOT! As a code closes in on peak performance on a CRAY
C94A, the MIPS rate actually goes **DOWN**. This is because peak
efficiency occurs for vector work, and one vector instruction can
start as many as 128 operations on a CRAY C90 processor. I have
heard MIPS referred to as:
- Meaningless Indicator of Performance Statistic
- Makes Idiots Purchase Shtuff (or whatever)
- Marketing Is Pushing Something
MIPS can be meaningful for comparing systems with the same general
architecture, but for comparisons between processors as different as
Cray and Intel, MIPS is worse than useless; it is actively
misleading. Go to comp.arch for more about this.
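A toy calculation shows why MIPS is misleading for vector machines.
The numbers below are invented purely for illustration: vectorizing replaces
many scalar instructions with a few vector instructions, so the instruction
rate falls even as the operation rate rises.

```python
def rates(instructions, operations, seconds):
    """Return (MIPS, MFLOPS) for a run."""
    return instructions / seconds / 1e6, operations / seconds / 1e6

# scalar version: one instruction per floating-point operation
scalar_mips, scalar_mflops = rates(128e6, 128e6, 1.0)

# vectorized version: each vector instruction starts 128 operations,
# and the whole run finishes sooner
vector_mips, vector_mflops = rates(1e6, 128e6, 0.25)
```

Here the vector run has a far lower MIPS rate than the scalar run despite
delivering several times the MFLOPS, exactly the inversion described above.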

For my money, the best measure of performance between heterogeneous
systems is CPU-memory bandwidth. And the best measure of bandwidth
of which I am aware is John McCalpin's "STREAM" benchmark suite. By
that measure, based on the result data found at URL:
the CRAY C94A is (in the first sense of meaning):
* *
* 197 to 209 *
* *
times faster than the Pentium Pro 200.
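The idea behind a bandwidth benchmark like STREAM can be sketched in a few
lines. This is a toy pure-Python version of the "triad" kernel, nothing like
the real tuned benchmark (interpreter overhead dominates, and the array size
here is arbitrary), but the bytes-moved accounting is the interesting part:

```python
import array
import time

def stream_triad(n=100_000, scalar=3.0):
    """Toy STREAM 'triad' kernel: a[i] = b[i] + scalar * c[i]."""
    a = array.array('d', [0.0]) * n
    b = array.array('d', [1.0]) * n
    c = array.array('d', [2.0]) * n
    t0 = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - t0
    # three 8-byte accesses per iteration: read b, read c, write a
    bytes_moved = 3 * 8 * n
    return bytes_moved / elapsed / 1e6   # MB/s
```

The benchmark's measure of merit is the bytes-moved-per-second figure, not
the arithmetic, which is why it compares such different machines usefully.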

Date: Tue, 8 Oct 1996 14:51:02 -0400 (EDT)
Message-Id: <***@eve.umiacs.UMD.EDU>
From: "David A. Bader" <***@umiacs.UMD.EDU>
Subject: Cray, as told by Markoff

This is great!
Date: Mon, 7 Oct 1996 14:15:01 -0700
Reply-To: "CYHIST Community Memory: Discussion list on the History of
Subject: IP: Re: a marvelous obit re: Seymore Cray by John Markoff
Community Memory: Discussion List on the History of Cyberspace
I have permission from John Markoff of the NY Times to forward the
following story to the CYHIST list. Markoff had shared it with Dave
Farber, who posted it to his IP list. This is a brief anecdote that did
not make it into Markoff's obit on Cray.
Thanks very much. As occasionally happens, the anecdotes I
value most don't make it into the paper. This is a story John Rollwagen
told me. You're welcome to share it with your list.
Mr. Cray would do much of his computer design work on a fresh pad
of engineering paper, frequently going through an entire pad in a day.
There have been many legends that have grown up around Mr. Cray's
reclusive work habits which frequently went late into the night.
Mr. Rollwagen recounts one story of a customer who visited Mr.
Cray's home in Chippewa Falls. When the man asked what the secrets
of his success were, Mr. Cray said, "Well, we have elves here and they help me."
When the visitor, who was a French scientist, expressed his
astonishment, Mr. Cray took him to look at a tunnel that he had dug under
his home. Shored up with four by four cedar logs, the tunnel appeared to go
in random directions, at one point going straight up into Mr. Cray's lawn.
(Mr. Cray later explained to Mr. Rollwagen that the tunnel had gone
straight up because one day it had collapsed while he was digging it and a
tree in his front yard had fallen into the tunnel.)
Mr. Cray explained to his visitor that he would work at his home on
computer design problems for three hours at a stretch. When he reached a
technical stumbling block, he would then retire to the tunnel where he
would dig for an hour.
"While I'm digging in the tunnel the elves will often come to me
with solutions to my problems," he said.
601 Van Ness Ave., No. 631 San Francisco, CA 94102
TELEPHONE: (415) 775-8674 FAX: (415) 673-3813
* * * WEB: http://www.netaction.org * * *
Moderator: Community Memory
Materials may be reposted in their *entirety* for non-commercial use.
Get this list in digest form: SET CYHIST DIGEST
Leave this list: SIGNOFF CYHIST
Date: Tue, 8 Oct 1996 14:47:36 -0700 (PDT)
From: ciotti (Bob)
Message-Id: <***@laika.nas.nasa.gov>
To: unicos-***@cugsrv1.cug.org
Subject: Re: Seymour Cray
Subject: Seymour's Services
Date: Tue, 8 Oct 1996 16:08:32 -0400
SEYMOUR R. CRAY, 71, a world-renowned designer of supercomputers, died Oct.
5, 1996.
Memorial services will be held in Chippewa Falls, Wis., and Colorado Springs,
CO. Seymour's family would like to have a "Celebration of Life" with
Seymour's friends. The "Celebration of Life" will be held in Chippewa Falls
at the Cray Research Inc. River Side Building at 2:00PM on Oct 12; and in
Colorado Springs at the Red Lion Inn at 2:00PM on Oct 18.
Mr. Cray was born Sept. 28, 1925, in Chippewa Falls to Seymour and Lillian
(Scholer) Cray. He was married Oct. 8, 1980, to Geri Harrand, who lives in
Colorado Springs.
He is survived by a son, Steven of Chippewa Falls; two daughters, Susan
Borman of Eau Claire, Wis., and Carolyn Arnold of Minneapolis; a sister,
Carol Kersten of Rochester, Minn.; and five grandchildren.
Mr. Cray had served in the Army during World War II. He completed graduate
school at the University of Minnesota in Minneapolis. In 1957 he left Univac
to form Control Data Corporation. In 1972, he founded Cray Research Inc.,
where he had developed the world's fastest super-computers used for research,
weather forecasting, oil exploration and nuclear energy. He had lived in
Colorado Springs for eight years.
Memorial contributions may be made to the Pikes Peak Area Trails Coalition,
1426 N. Hancock, Suite 4, Colorado Springs 80903; or to the Seymour R. Cray
Memorial, University of Minnesota, 1300 S. Second St., Suite 200; Minneapolis
the initial report of the automobile accident --

a story on the dangerous intersection at which the accident occurred --

report of seymour's condition stabilizing --

report of seymour's death --

obituaries including seymour's --

report of "celebration of life" planned for 18 October --

Here is also a URL for the transcript of a Video History Interview
with Seymour by David Allison from the Smithsonian Institution's
National Museum of American History on May 9th, 1995.


Speed has always been important; otherwise one wouldn't need the computer.
--Seymour Cray

From a purely economic/business standpoint, it is really hard to see how
any of the "big" supercomputing startups could possibly have survived.

Consider the numbers:
ETA spent ~$400 Million
SSI(1) spent >$250 Million
CCC spent >$200 Million

Right now, the worldwide market for computers costing $5 Million
and up is about $680 Million per year.

The life expectancy of a supercomputer design is about 4 years.

Even assuming really fat manufacturing margins, one would be
very hard pressed to apply more than 50% of gross income to
retiring the engineering/development costs.

The product is going to begin with 0% market share, and
four years later is going to again have near 0% market share
as it is eclipsed by competitors' products and subsequent
generations from the original vendor.

So what sort of market share would be required to pay off
the investments incurred by these startups?

The numbers show that you would have to *average* 20% market share for
a four-year period, *and* be able to apply >50% of gross income to
retiring the initial investment (or rolling it over into the next
generation).
Given the inevitable cycle of desirability of a product, you would
probably have to capture 40% market share at your peak to do this.

This is approximately the share of the high-end market that is held by
SGI+Cray, and is about 3x larger than the next largest entry.

I would certainly not want to bet my money on *anyone* being able to
do that in the current era.

The only way to succeed is to do the initial development for very
little money, and/or arrange financing that does not need to be
paid back....
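The market-share arithmetic above can be checked with a quick back-of-the-envelope sketch. The figures are taken from the text; the $272M development cost is a representative value inside the $200-$400M range quoted for the startups, not a number from the source.

```python
# Back-of-the-envelope check of the market-share argument.
# Figures come from the surrounding text; dev_cost is a representative
# value within the quoted $200M-$400M range, not a sourced number.
market_per_year = 680e6   # worldwide market, $/yr, machines >= $5M
design_life = 4           # years a supercomputer design stays sellable
recoup_fraction = 0.50    # at most 50% of gross can retire development cost
dev_cost = 272e6          # representative development spend

# Average market share needed so that recoup_fraction of gross income
# over the design life just covers the development cost:
share_needed = dev_cost / (market_per_year * design_life * recoup_fraction)
print(f"average share needed: {share_needed:.0%}")  # -> 20%
```

which reproduces the 20% four-year average figure claimed above.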


Simulation and modeling problems are used to study physical systems.
As they are refined, they tend to become more complicated and to require
very large memories. One thing that generally stays the same is the
desire to run these problems in the shortest time possible. When such
problems are run on the computer that does them fastest, we say we
are doing Supercomputing. This is the concept of Supercomputing
from the point of view of an application.

As an example of such problem growth, one fluid flow simulation needed
about 100 floating point operations per mass point when it was first
developed over twenty years ago. The calculation has since been
continuously refined by adding improved approximations of the physics
and by improving the fineness of detail, increasing the number of points
in the application space; it currently requires over forty thousand
floating point operations per mass point, and the number of mass points
has grown by a factor of over ten thousand. When the problem was first
developed it was represented as about a dozen dynamical quantities per
mass point. The current version maintains over two hundred such
quantities per mass point. During the course of a problem run, the
calculation is carried out at every mass point every cycle. All this
improvement notwithstanding, the calculation takes less time today than
when it was begun. The amount of time needed in the beginning was
about 20 to 40 hours for as big a problem as then seemed worthwhile
doing. It is the same today.
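The growth described above can be totted up directly. A minimal sketch, using only the factors the text quantifies (it ignores any change in the number of cycles per run, which the text does not give):

```python
# How much bigger did the fluid-flow calculation get?  Factors are
# taken from the text; cycle-count growth is not quantified there
# and is deliberately left out.
ops_then, ops_now = 100, 40_000   # flops per mass point, then vs. now
point_growth = 10_000             # growth in the number of mass points

work_growth = (ops_now / ops_then) * point_growth
print(f"total work grew ~{work_growth:,.0f}x")  # -> ~4,000,000x

# Yet run time stayed at 20-40 hours, so delivered machine speed must
# have grown by roughly this same factor over twenty-plus years.
```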

This is the effect of running the program on a Supercomputer. In the
process enormous quantities of data are generated and of course, much
of this must be saved for a variety of reasons. Obviously, very large
and as fast-as-possible storage devices are needed. Thus
Supercomputing is characterized by any combination of a big
computational burden, a big memory requirement, big I/O and storage
requirements and shortest time of execution.

It seems most useful to consider the question of software. Most
commentators usually skirt this problem. Superficially, the software
used for small computers is roughly the same as for that used in the
Supercomputers with certain exceptions:

1. Software developed on small or otherwise inadequate computers
usually shows all sorts of inadequacies, such as tables that are too
small, or other installation and memory limitations.

2. Software that originates on small computers is generally neither
robust nor easily expandable.

Please understand that these are observations not criticisms.
Supercomputer applications equally need good compilers, editors,
debuggers, file managers and so on. But the applications are typically
so big that the support software must be very commodious and robust,
and such things don't usually come from software developed on small
machines. There are, no doubt, a few exceptions, but it is a matter of
fact that the highly vaunted time sharing systems that came from
MULTICS generally were not able to manage the extreme sizes of most
Supercomputer applications. And so forth.

The first commercially available computer in this country, generally
thought to be the UNIVAC I (UNIVersal Automatic Computer), delivered in
1951, owes its existence in no small way to the ideas of John von
Neumann, John Mauchly, and J. Presper Eckert, and the support of the
Franklin Life Insurance Company of Philadelphia - a business! {
Considering the experiences since then, this is very surprising;
businessmen generally take, not give.} By the time the UNIVAC was
deliverable, the Eckert-Mauchly Company had been acquired by the
Remington Rand Company. The first scientific problems run on this
machine were related to the study of nuclear energy and weather
prediction. Both areas depended on hydrodynamic calculations.

In the early days of computing, hardware that was developed for
(Super) Computer usage always found its way into the smaller, and lower
cost computers that everyone "knew" would be the basis for future
man-machine interactions. It is worth noting that today component
innovation relies almost exclusively on the mini/micro computer
industries, and we hear claims that the micros
will kill Supercomputers. Without belaboring these trends, I can only
say that they both seem wrong-headed and unfortunate.

Once such computers became available, three government agencies were
most responsible for the design and construction of big computers,
though at first they were not known as Supercomputers. In fact, these
agencies weren't formally chartered until sometime later, but for the
purposes of this response, I will consider their influences as
beginning in the 40s. The agencies were the Atomic Energy Commission (AEC),
the National Security Agency (NSA), and the
National Aeronautics and Space Administration (NASA). Although at the worker
level there was practically no coordination between these agencies,
each stressed computing features that nicely fitted together.
Technically, the following is rather an oversimplification, since each
agency strove for total system-level improvements. Notwithstanding,
the features listed here were particularly important.

All three agencies needed very large memories. The AEC needed
high-speed floating point arithmetic. The NSA needed high-speed
fixed-point arithmetic and input/output. NASA needed very
flexible input/output.

I suspect that very few people have ever written a program that needed
to be run on a Supercomputer, so it might be helpful to examine some of
the characteristics of such programs. To anticipate a chorus of
exceptions, I will say just once, "Not all programs exemplify all the
following characteristics." When you read what's below and wish to
object, refer to the foregoing sentence.

Generally the control flow in the program is rather straightforward.
The three things most needed in a Supercomputer are:
fast arithmetic,
large memories,
equally fast I/O, and
a usable, robust operating environment.
{So I can't count ... either.} A typical problem may
require between 10^14 and 10^17 arithmetic operations; clearly, the
floating point unit has to be big and fast, and so does the memory
(typically each floating point operation is responsible for 24 bytes
of memory traffic). Very large memories are needed to
accommodate the billions of WORDS needed for each data set, and the
memory traffic on average has to move 3 words (24 bytes: 16 in and 8
out) per floating point operation.
It is typical that every data point is re-computed every cycle and
it is usual that on each cycle each point requires tens of thousands
of arithmetical operations. Often there is an enormous gush of
output. Number ranges are often too big for anything other than
floating point. Notwithstanding, there is always a problem of
retaining some numerical significance so multiple precision
arithmetic is vital. Perhaps you can guess that there is just a
limited number of genuine Supercomputing applications, but even this
number shrinks dramatically when we try to see who is willing to pay
for developing a given problem.
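The figures above imply concrete hardware requirements. A quick sketch; the 10^15-operation problem size and 30-hour target are hypothetical mid-range choices from the spans quoted in the text, not sourced values:

```python
# What the quoted figures imply for sustained rate and memory bandwidth.
# The problem size and run-time target are hypothetical mid-range picks
# from the 1e14-1e17 operation and 20-40 hour spans in the text.
ops = 1e15            # total floating point operations for one run
hours = 30            # target run time
bytes_per_op = 24     # 3 words of memory traffic (16 in, 8 out) per flop

flops_needed = ops / (hours * 3600)          # sustained flop/s required
bandwidth = flops_needed * bytes_per_op      # sustained memory bytes/s

print(f"sustained rate:   {flops_needed:.2e} flop/s")
print(f"memory bandwidth: {bandwidth / 1e9:.0f} GB/s")
```

For these numbers the memory system must sustain roughly 24 times the arithmetic rate in bytes per second, which is why the text keeps returning to memory traffic as the limiting factor.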

Given that there is a problem that needs to be run on a Supercomputer,
it should be obvious that it won't run constantly, so hundreds or
thousands of smaller problems can be run during such gaps. A common
mistake among certain critics is to say that this is a waste of
Supercomputer resources, or that it is simply not cost-effective. It
should be clear from the foregoing that the criticism is simply
wrong-headed. Supercomputing is not defined by these little problems,
but they can and do benefit enormously by being run on a big machine.
A little problem on a big machine is easily managed, but it becomes a
big problem on a little machine. So, instead of wasting a lot of time
trying to figure out how to fit it into a small or otherwise inadequate
machine, the developer is free to concentrate on making the problem do
what he wanted it to do. (I admit that some people like to squeeze
every last twitch out of a computer. To others, such behavior is
unseemly for real gentlemen.)

Price is not a clearly useful criterion for a Supercomputer. To
see this, we review some (cynical) definitions of Supercomputing.

1. The first idea we considered is that a Supercomputer is that
machine which will run your problem the fastest. Notice it's your
problem, and it's not necessarily the machine you are currently using,
and the concept of "fastest" is one you should control. You might
reasonably decide that it's total throughput time that matters rather
than how fast one can do arithmetic.

2. Showing some real insight, Ken Batcher stated that,
"A Supercomputer is one that will change your compute-bound problem into
one that is I/O-bound." Notice here that the limitations related to
I/O are intrinsically tied to the nature of the problem. Words like
"entry-level," and "mini-super" have entered the lexicon via
over-zealous marketers and salesmen in order to sell things that are
not necessarily supercomputers. (The term itself first appeared about
twelve years ago. Who first used it is subject to argument, but Jack
Worlton was one of the first to speak about such machines, and today is
a leading advocate in trying to change the term to (Ultra) High Speed
or Performance Computing, because as he notes, marketers have
thoroughly trashed the original meaning of "Supercomputer.")
Typically the inadequacies of small computers reside largely in their
inability to support adequate memory traffic and FAST I/O.

3. Still another kind of insight: Neil Lincoln says that a
Supercomputer is one that is only one generation away from being what
you really need.

We can observe here that it is the application that decides if you're
using a Supercomputer. Finally, to put extra stress on the idea that
the concept of "fastest" is one you should control, consider what you
would hold important if all the arithmetic in your problem took zero
time. How fast would your problem run, and what would you then do to
make it go faster?

I hereby apologize for the obscene length of this polemic.

0. Originally, a main distinguishing characteristic of big machines was the
amount of storage (memory, disk, and tapes) available.
1. High speed computer development used to (until ~1968) be a source of
design and components for less capable (and cheaper) machines.
2. Most of the applications that run on supercomputers aren't 'real'
supercomputer problems. It's just cheaper and faster to run on a big
machine.
3. Certain applications imply a complexity level that cannot be done on
anything but a supercomputer. Typically these are characterized by at least
one of the following: much larger than average number of computations per
mass point; unusual kind of computational needs per mass point; very large
number of mass points; a real time requirement; sheer volume of work.
4. As Gene says, the results don't have to be accurate, merely
consistent. And I add, physically defensible.
5. Considering the part of the user community that makes real use of
Supercomputers, and on the basis of direct talks with many such persons,
I say that none of these real users make any use of Linpack results, which
convey no data useful to these real applications. The people who run, say,
the community weather model represent a form of application that has hardly
any connection with Linpack; what Linpack may show has hardly any import to
the algorithms the weather models use.

Cray Research managed to build its first shippable Cray 1 for
under $10 Million. Of course, it did not have much software.
Convex managed to build its first shippable C-1 for around $20 Million,
including software.
Others have also gotten to market with innovative hardware for a lot less
than $100 Million. Critical issues for startups:
1) Know what your initial target market is and understand what it requires.
2) Maintain focus and don't allow significant investment in anything that
does not aid that market.
3) Do everything you can to minimize time to market. Every extra month just
burns more money.
4) Keep your initial staff small (just large enough to do the job).
More people mean more time spent communicating instead of doing.
If you are very selective in hiring, only accepting the top 10% of
potential candidates, and motivating them with "average salaries and
extraordinary stock options, plus exciting work", you should be able
to get several times industry average effort and productivity.
5) Use other people's work whenever possible - like starting with Unix as
an OS instead of inventing your own. Look for strategic partnerships
in as many areas as possible. These efforts will help (3) and (4).
6) Be lucky. :-)
[It helps to be first one to market in your niche, or for your target
market to experience a business boom just as your product is ready.
It also helps for your vendors to deliver what they promise. The
reverse of any of these can sink you. Examples abound.]


The following interview with Seymour Cray appeared in the December 1982
issue of Datamation magazine. The interviewer, Jan Johnson, posed the
same questions to four people: Gene Amdahl, Victor Poor, James Thornton,
and Seymour Cray. There were also follow-up questions addressed to
individuals. Transcribed below are the questions addressed to Cray, and
his answers. All ellipses are in the text as it appeared in Datamation.

A lot of Cray's answers are memorable.

Tom Ace

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Datamation: What technological developments in the past five to 10 years
have had the biggest impact on your niche of the computer industry?

Cray: I guess there haven't been any.

Datamation: No developments in mathematics, architecture, or in new ways
of looking at things?

Cray: Well, the problem I have with probably most of these questions is
that I don't pay much attention to what is going on in the world. I just
do my own kind of work, so if there were something new in mathematics, I
wouldn't know about it.

Datamation: What have been some of the driving forces behind the changes
in your niche of the industry?

Cray: If Fairchild would quit trying new technology out on us, we'd get
our parts a lot faster. They are always giving us this new technology
and of course it doesn't work, so they have to keep trying it again. It
seems like a real deterrent to getting our job done because it's never
necessary to have the new technology.

Datamation: What do you call state of the art?

Cray: I suppose that it is whatever you can do.

Datamation: Do you believe the days of big advances in computing
technology are gone?

Cray: Looking at my own work, since I have such a narrow view of things
here, the advances don't seem to be any different than they ever were.
Between the Cray 1 and the Cray 2 the clock rate dropped from 12.5 ns to 4 ns.
That's the same sort of geometric progression we've had in the past.
Performance of the machine is six to 12 times the previous model,
which is more aggressive than it has been in the past. It seems that we
are progressing at about a constant rate. I don't see that changing in
the next machine, either.

What do you see as the next big advance in your part of the industry?

Cray: I guess the big change we need now is in materials. We have a
project investigating areas in chemistry. We need different materials
than silicon. The U.S. technology has been locked into silicon because
the manufacturing facilities are locked in. We can't break out and
create new directions into anything else because everything has been set
up. It's the same problem as in the automobile and steel industries.
Right now, there are a lot of management people in the large semiconductor
companies who are getting nervous about the situation. They can see the
situation being a locked-in one. But they have just recently recognized
it. They should have recognized it four or five years ago. Now they
don't have time to make the conversion to meet my purposes. My effort
is not going to be inhibited; I can find no one to help me, so I am
proceeding with gallium arsenide. It's not my choice; the only place
we can buy is in Japan, and I don't want to do that. All the Japanese
machines are going to be made with gallium arsenide.

Datamation: So you're making the chips?

Cray: That's what I'm saying. It's not my choice.

Datamation: How far away do you think your project is?

Cray: The first delivery is in fall of '86 and it's a three-year program.
So we have to develop it in '83 to ship it in '86.

Datamation: What problems do you have in working with gallium arsenide?

Cray: Well, it's hard to pronounce. Once you get over that ...

Datamation: What do you run into?

Cray: Indium phosphide.

Datamation: What have been some of the limitations you've encountered in
your niche of the industry?

Cray: I suppose the limitations are just the visions of the designer; there
aren't any physical limitations. I can't see very far ahead, so I just
take small steps--and I keep taking small steps because I don't want to
retire yet. For myself, it's always been a matter of not being able to
communicate well enough with other people to get any help from them; so I
do it myself. My limitations, then, are what I can do in my own personal
time. I don't use special tools when I design except paper and pencil.
If you are looking for barriers, I don't think there's any one physical
barrier. It's only the ability to conceive of the next step. It's always
easy to do the next step and it's impossible to do two steps at a time. I
think it's appropriate to say that each step is rather evolutionary, so of
course you use what you learn from the previous step. I don't think I've
done anything radical in my entire career. It's just been a series of
small steps. It's just a matter of having the imagination to do the next step.

Datamation: What has surprised you most in how your products are used?
What are you learning about the use of your products?

Cray: I just design these things for myself. I'm always surprised when
other people use them. I don't know what all this supercomputer talk is
about. They certainly aren't supercomputers; they are kind of simple,
dumb things.

Datamation: But they run fast and apparently that is making a big impression.

Cray: Apparently that is important.

Datamation: What has surprised you most about your competitors?

Cray: You mean there are some of those? There probably are--I just
haven't looked.

Datamation: When you take a look at the industry today ...

Cray: I never look back and I never look sideways.

Datamation: Do you ever worry or think ...

Cray: Never, never!

Datamation: What has surprised you most about your market?

Cray: I certainly have been surprised by the market. We keep selling
computers to the same old people and they are getting old at the same
rate that I am. We don't even need introductions when we come out with
a new computer because we already know the people. It's just the same
market for us over and over again. We sell a machine a month. We've
always sold a machine a month. Pretty soon those people are going to
start dying off--then what's going to happen?

Where Seymour Cray lived:
1925-1943 Chippewa Falls, WI (birth through H.S. graduation)
1943-1947 Military
1947-1951 Minneapolis, Minn. (College)
1951-1962 Minneapolis (ERA, Control Data)
1962-1989 Chippewa Falls, WI (Control Data, Cray Research)
1989-1996 Colorado Springs, CO (Cray Research, Cray Computer, SRC)

So here's where he lived in terms of fractions of his life:
63% of his life in Chippewa Falls, Wisconsin (45/71)
21% of his life in Minneapolis, Minnesota (15/71)
10% of his life in Colorado Springs, Colorado (7/71)

"I enjoy the pessimism of the rest of the world." -- Seymour Cray

"Nobody, and I mean nobody, knows how to program large parallel machines."
-- Seymour Cray (needs confirming source)

"Don't do anything that other people are doing.
Always do something a little different or significantly different if you can...

Every time you take a new approach, new ingredients, you increase risk.
But it was my feeling, that the rewards would come often enough so that
taking those kinds of risks would have a long-term benefit.
And, I think they did during my career."

-- Seymour Cray, 1994



S/N-3 is still at NCAR, albeit not in production.
[This may have changed.]
Display at the bottom of the stairs at the far end of the main lobby at
the Mesa Lab. (NCAR also had S/N-14, which is in the Smithsonian Air
And Space Museum in Washington D.C. the last time I was there -- for
SC'94 I think.) The next time any of y'all are out this way, stop by
and take a gander. The Mesa Lab itself, designed by I. M. Pei, is worth
a look all by itself (view Woody Allen's Sleeper before visiting),
and the network of hiking trails in the Flatirons
behind the Lab are worth spending some time on, too. Avoid the deer,
bears, and rattlesnakes. Eat lunch in the cafeteria; the food's not bad,
and what the facility may lack in tablecloths it makes up for in the view.

"I have really strong feelings about that," he said.
"I feel the bigger the group that works on the project,
the lower the chances for success. I'm appalled at our trying to make a
country-wide coordinated effort. I just can't imagine it ever being successful.
"I believe you want a lot of independent people thinking their own thoughts
and trying their own things.

We're not going to participate in any national effort,
and I don't want any money from the government.
We've got competition within the company.
I've got a group here five miles away who I know are trying to outdo me."

Hacker's Dictionary Entries

cray /kray/ n.

1. (properly, capitalized) One of the line of supercomputers designed by
Cray Research. 2. Any supercomputer at all.
3. The canonical number-crunching machine.

The term is actually the lowercased last name of Seymour Cray,
a noted computer architect and co-founder of the company.
Numerous vivid legends surround him, some true and some admittedly
invented by Cray Research brass to shape their corporate culture and image.

cray instability n.

1. A shortcoming of a program or algorithm that manifests itself only when
a large problem is being run on a powerful machine (see cray).
Generally more subtle than bugs that can be detected in smaller
problems running on a workstation or mini. 2. More specifically,
a shortcoming of algorithms which are well behaved when run on
gentle floating point hardware (such as IEEE-standard or DEC) but which
break down badly when exposed to a Cray's unique `rounding' rules.

crayola /kray-oh'l*/ n.

A super-mini or -micro computer that provides some reasonable percentage of
supercomputer performance for an unreasonably low price.
Might also be a killer micro.

crayon n.

1. Someone who works on Cray supercomputers. More specifically,
it implies a programmer, probably of the CDC ilk, probably male,
and almost certainly wearing a tie (irrespective of gender).
Systems types who have a Unix background tend not to be described as crayons.
2. A computron (sense 2) that participates only in number-crunching.
3. A unit of computational power equal to that of a single Cray-1.
There is a standard joke about this usage that derives from an
old Crayola crayon promotional gimmick:
When you buy 64 crayons you get a free sharpener.
crayon n.
1. Someone who works on Cray supercomputers. More specifically,
it implies a programmer, probably of the CDC ilk, probably male,
and almost certainly wearing a tie (irrespective of gender).

Nothing could be further from the truth. I cannot recall *ever* seeing a
programmer at CRI (or CCC) in a tie, unless it was needed to impress
a customer - and often not then. At a place where the Founder and CEO
typically wore flannel shirts, and could usually be found up to his elbows
in a piece of recalcitrant hardware, the phrase "dress for success" had
a somewhat atypical meaning.
[Actually, I can't change that here; that's what's printed by MIT Press.
We need to get that revised in the Jargon list.]

Real Programmers (from Ed Post)

What kinds of tools does a Real Programmer use?
In theory, a Real Programmer could run his programs by
keying them into the front panel of the computer.
Back in the days when computers had front panels, this
was actually done occasionally.
Your typical Real Programmer knew the entire bootstrap loader by
memory in hex, and toggled it in whenever his program destroyed the bootstrap.
Back then, memory was memory - it didn't go away when the power was turned off.
Today, memory either forgets things when you don't want it to,
or remembers things long after they're best forgotten.
Legend has it that Seymour Cray (who invented the CRAY-1 supercomputer,
and most of Control Data's computers) actually toggled the
first operating system for the CDC-7600 in on the front panel from memory
when it was first powered on. Seymour, needless to say, is a Real Programmer.

From http://bush.cs.tamu.edu/~erich/misc/program

This section is the Allan Gottlieb section.

"The trouble with programmers is that you can never tell what a
programmer is doing until it's too late."
-- Seymour Cray @ CIA ~mid-1970

I guess it's not just great minds that think alike. :-)
--Jim Davies


In article <5j7ntp$bij$***@cnn.nas.nasa.gov> you write:
2) When he designed the first Cray-1, s/n-1, Mr. Cray used RAM chips with
straight parity. The system was installed at the Los Alamos National
Laboratory. It averaged 20 minutes of blinding speed per system failure
(due to a parity error in memory). This was obviously a problem, so, after
consulting with the LANL folks ...
My understanding (from my initial indoctrination at CRI back in'81) was that
SECDED was added at *NCAR's* insistence, not LANL's. Might want to check
this out with Vince Wayland (the original Cray AIC at NCAR) or Bob Walan
(who sold the NCAR machine).
A bright student or architect somehow manages to get time to visit
Seymour. Cray will listen to that student's ideas and nod understanding
or disagreement. He listens to a few ideas, but he makes a comment like
"Sounds good."
But that does NOT mean that Seymour will take the idea and place it into
his architectures. Too many people with improvements attempt this.
If your idea is so good, why don't YOU run with it? Leave my ideas
(and his infrastructure) to me.
Well, I don't know about this...but I do know that Seymour asked us poor,
benighted programmers for suggestions when the Cray-4 was being designed.
Amazingly enough, we got a *lot* of what we asked for. We also got explanations
for why some of our requests simply weren't doable.
Note: from the Cray-2 (immersion) to the Y-MP (chilled) on,
CRI machines have been cooled by Fluorinert (tm) [a 3M product].
Prior to that they were chilled using Freon. The C-3 (CCC machine)
was also immersion cooled.
As was the Cray-4, just FYI.
"The future is seldom the same as the past" - Seymour Cray, 6/4/95.
Hey, don't I get credit for reporting this one?
Date: Wed, 15 Mar 1995 09:26:56 -0700
Subject: Re: cray
|We have a Cray-2 prototype here called snq2 (q for quadrant),
snq2 isn't the thing that Newt Perdue had a photo of in the aquarium?
I don't really know; I've only worked here, not at CRI. I think
there was a snq1 also, which would have been the first prototype.
Snq1 was the single-quad machine initially delivered to Livermore. Q2 was the
machine delivered to the University of Minnesota.
S7? Not a serial number is it?
Sort of; it's the "seventh" Cray-3 tank we built. Numbers S1 through S4
are one-octant tanks, S5 and S6 are two-octant tanks (S5 is the one
at NCAR which they call "greywolf"), and S7 is the only four-octant tank.
The module sets have tended to roam freely between tanks as needed
(e.g. S6 hasn't ever spent any long periods in production because it
gets used to test spare and replacement modules for S5 and S7).
S4 has been converted to a 2-octant tank for use as the PIM system.
S1 through S3 are essentially gone at this point -- replaced by Cray-4 tanks.
Actually, they were *modified* to be Cray-4 tanks.
Yes. They're not quite like the old T registers, in that they have
data paths to the A, S, and V regs. We're using them for argument
passing, stack pointers, scalar register spills, scratch space for
vector reductions, etc. (i.e. the same sort of things local memory
And V<->S transfers (that was one of the things we *didn't* get).

2) The two full prototype machines, which were 1 CPU each, sn-Q1 &
sn-Q2. <<Locally called Quarter-Horses because they were a quarter
of a Cray-2, which had 4 CPUs.>> The UofMinnesota Super Computer
Center had one of these systems for a long time. ... The other spent
time at the 1440 Northland Drive (Mendota Heights, Mn.).
FYI: SNQ2 ended its life at Cray Computer in Colorado Springs.
It was shut down for good about the time the first Cray-3's became
operational in 1993-ish.

Ducky Day
1440 Northland

Move this text around:

Some additions for panel 18.

[immersion cooling]

ETA used liquid nitrogen immersion on its E and G models (10.5 and 7 ns
clock respectively)

[time line]

Lloyd Thorndyke, in his address to the Frontiers of Supercomputing
Conference at LANL in 1994, said the words "Engineering Technology
Associates Systems" in reference to ETA Systems. However, when asked,
Neil Lincoln would say "ETA is not an acronym, it doesn't stand for
anything." Many think ETA is a play on ERA, the original TLA for
the first Norris company. ETA also is a TLA for the Basque terrorist
organization, FWIW. :-)

You might as well take CDS off the time line, they don't make computers
any more. [Well actually that's just a terminal state, so I'll keep it,
but not grow it.]

[vector machines]

The Star-100 was the first memory-to-memory vector machine produced
by CDC. Its lifetime was from the early '70s to the early '80s. It
was all low density ECL, on boards that slid over "cold bars", water
chilled. The Cyber 203 followed, and was half low density and half
high density (IC); the Cyber 205 was all IC's. There were only a handful
of 203 machines, all in the early 1980's. The 205's lifespan was
around 10 years, from the early '80s to the early '90s. These machines
were memory-to-memory vector, highly CISC. Many instructions were
microcoded. The term "long vector" is relative; the half-performance
vector length (n-1/2, the length at which half of peak rate is reached)
varied on the machines: several hundred operands (64-bit words) on the
Star, down to as few as 60-80 on the 205, depending on the instruction. The
ETA machines reduced that further, down into the 10-20 range on some
instructions with "vector shortstop", a technique to feed the results
back into the vector pipes using shorter paths and less-than-full
pipe results. Scalar shortstop, with the 256 64-bit register set,
had been in place since the Star.
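The half-performance length mentioned above (often written n-1/2) has a standard model, due to Hockney. A minimal sketch; the rates and startup time below are illustrative stand-ins, not measured Star-100 or Cyber 205 figures:

```python
# Hockney's vector timing model: t(n) = startup + n / r_inf, where
# r_inf is the asymptotic (infinite-vector) rate.  The n-1/2 length is
# where a machine delivers half of r_inf.  Numbers here are illustrative
# only -- not measured Star-100 or Cyber 205 values.
def rate(n, startup, r_inf):
    """Delivered rate (results/second) for one length-n vector operation."""
    return n / (startup + n / r_inf)

r_inf = 50e6      # hypothetical asymptotic rate, results/s
startup = 4e-6    # hypothetical pipeline startup overhead, seconds

# n-1/2 falls directly out of the model: startup * r_inf.
n_half = startup * r_inf
print(n_half)                                  # vector length, elements
print(rate(n_half, startup, r_inf) / r_inf)    # 0.5 by construction
```

The model makes the "long vector is relative" point precise: a machine with a large startup-rate product needs long vectors before its pipes pay off, which is why shortstop paths that cut the effective startup mattered so much on the ETA machines.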

These machines were always compared to the vector register machines
from CRI from an architectural standpoint.

[how many instructions]
The Star-100 had 231 different instructions. Its successor machines
had slightly fewer, each successor paring the set down a bit. The ETA
instruction set was quite a bit different in that it had CM instructions
to help with semaphores and locks and such. However, even the ETA
instruction set had several dozen instructions that were a near exact
match for their Star-100 predecessors.

Articles: comp.parallel
Administrative: ***@cse.ucsc.edu.SNIP
Archive: http://groups.google.com/groups?hl=en&group=comp.parallel
Cydrome Leader
2008-01-22 18:28:33 UTC
Post by Eugene Miya
Archive-Name: superpar-faq
Last-modified: 8 Feb 2006
This was the best posting ever.

[Moderator: Oh, a typo in the header.... The visible and behind the
scenes history of what went into this panel is quite entertaining.]