New Scientist article
Some brief specs:
- Cray XT4 with 84 racks
- Cray XT5 with 200 racks
- XT4 board: four nodes, each with one quad-core AMD Opteron 1354 (Budapest) and 8 GB DDR2-800
- XT5 board: four nodes, each with two quad-core AMD Opteron 2356 (Barcelona) and 8 GB DDR2-800
- SuSE Linux
- Node-node communication: MPI, OpenMP, SHMEM, PGAS (a minimal MPI sketch follows this list)
- Liquid cooling: R-134a refrigerant cools the air as it enters and leaves each cabinet
- Lustre-based shared file system
- Storage: 48 DDN S2A9900 arrays totaling 13,440 1-TB SATA drives, plus 192 Dell OSS servers.
- High-speed intra-cluster network: InfiniBand DDR at 889 GB/s aggregate, over three 288-port Cisco 7024D IB switches and 48 24-port Flextronics IB switches, with Zarlink IB optical cables.
- External networks:
- 2 Cisco 6500 routers and a Force10 E1200 router
- Internet2: 1 OC-192 connection
- DOE ESnet: 1 OC-192
- DOE UltraScience Net: 2 OC-192 connections
- TeraGrid: 1 OC-192
- Archival storage on 28 Dell servers: two STK PowderHorn robots with 14 STK 9840 tape drives and 11,000 tapes; two Sun StorageTek SL8500 robots with 16 9940, 24 T10000A, and 24 T10000B tape drives and 9,800 tapes; and four DDN 9550 arrays with 1,500 TB of disk for the disk-cache tier.
- Job scheduling is apparently via PBS (Portable Batch System); a sample batch script is sketched after this list.
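
Since MPI does the heavy lifting on a machine like this, here is a minimal sketch of the kind of program those communication layers carry. This is generic MPI C, nothing Jaguar-specific; the compile command in the comment assumes the usual Cray compiler wrapper and is illustrative only.

```c
/* hello_mpi.c -- each rank reports itself and the node it landed on.
 * On a Cray XT this would typically build with the wrapper: cc hello_mpi.c -o hello_mpi
 * (elsewhere: mpicc hello_mpi.c -o hello_mpi) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(name, &len);     /* which node we landed on */

    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```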
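
And a sketch of how such a job might be submitted through PBS. The `size` resource (core count) and the `aprun` launch line reflect my understanding of typical Cray XT sites of that era; the exact resource keywords, queue names, and launcher flags vary from site to site, so treat this purely as illustration.

```bash
#!/bin/bash
#PBS -N hello_mpi           # job name
#PBS -j oe                  # merge stdout and stderr into one file
#PBS -l walltime=00:10:00   # ten-minute wall-clock limit
#PBS -l size=64             # assumed core-count resource keyword; varies by site

cd "$PBS_O_WORKDIR"         # start in the directory the job was submitted from
aprun -n 64 ./hello_mpi     # aprun is the Cray launcher; -n is the MPI rank count
```

Submission is then just `qsub hello_mpi.pbs`, with the output file landing back in the submission directory once the job finishes.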
The number-one system on the November 2008 Top500 list was the Roadrunner IBM BladeCenter system at LANL, at 1.11 PF Rmax and 1.46 PF Rpeak.
Why am I posting this? Well, it's an impressive supercomputer, and I think a record in computing capacity (I wonder whether anyone is actually turning out data from the combination of linked systems yet). I was also interested in the exercise of digging out the specs and seeing what they were actually using.
If OC-1 is 51.84 Mbps (not the ~55 Mbps I had in my head), then OC-192 = 192 × 51.84 Mbps = 9953.28 Mbps, just shy of 10 Gbps (which matches the 9953 Mbps figure I found).