
RSC PetaStream® - 1.2 PFLOPS per cabinet massively parallel supercomputer (MPSC)


A revolutionary, ultra-dense HPC solution with direct liquid cooling that supports over 250K execution threads in a single rack with just a 1 m² (10.8 sq. ft.) footprint, while protecting investment in software optimization and development for future many-core platforms. The RSC PetaStream solution set a world record in computing density: 1.2 PFLOPS per cabinet.

Benefits

Massively parallel system to address ExaScale needs
• The right system for many-core processors
• An architecture that scales compute, storage and network to ExaScale levels

Programmability and investment protection
• Built on industry-standard x86 architecture
• Utilize existing programming models and re-use existing applications
• Preserves investment in optimization for future many-core platforms

Flexible to meet specific customer requirements
• Choice of interconnects and topologies
• Provides options for innovative storage and interconnect designs

Energy efficiency
• Built on proven RSC direct liquid cooling technology
• Efficient and innovative power delivery subsystem
• Integrated system management and monitoring

RSC PetaStream - Technical datasheet

RSC PetaStream (Default config)

Type Massively parallel architecture
Architecture Intel® Many Integrated Core (MIC); execution modes: Native, Symmetric, Offload
Performance 1.2 PFLOPS per cabinet (Rmax)
Compute resources 1024x Intel® Xeon Phi™ 7120D based nodes (total resources: 62.4K cores, 250K threads)
Memory 16 TB GDDR5
Interconnect InfiniBand FDR 56 Gb/s / Intel® True Scale InfiniBand / 10 GigE and others
Local data storage Up to 640x Intel® DC S3500/S3700 SSDs with total capacity up to 0.5 PB
System management Fully integrated "RSC BasIS" software stack for high-performance computing: single system management point, flexible software configuration system, integrated supercomputer and data center view and management; Intel® Node Manager Technology
Operating system Optimized Linux microOS (kernel 2.6.38 and above)
Job management SLURM, PBS Pro, Moab, Platform LSF
Parallel file systems Lustre, Panasas, FhGFS
Libraries, compilers and tools Intel® Cluster Studio XE 2013
Power type 400 V DC
Power consumption Up to 380 kW per cabinet
Form factor Dual-side-access cabinet
Cooling RSC direct liquid cooling system: up to 400 kW per cabinet; direct integration with customer cooling system supported
Dimensions H 2.2 m (86.6 in.) x W 1.0 m (39.4 in.) x D 1.0 m (39.4 in.)
Power input 400/230 V (three-phase, neutral and ground)
