Center for High Performance Computing (CHPC)

The center operates a custom-designed IBM High Performance Computing system consisting of: 9 x3650 management and interface nodes (10Gbps Ethernet), each with dual quad-core processors; an iDataPlex cluster of 168 dx360-m2 nodes, each with dual quad-core Nehalem processors and 48GB RAM; an e1350 cluster of 7 x3950-m2 Symmetric Multiprocessing (SMP) servers, each with 64 cores and 256GB RAM; 6 Penguin Relion 2800GT servers, each with 4 NVIDIA Tesla K20 GPUs; a 288-port QLogic InfiniBand switch interconnecting all processor and storage nodes; and 55TB of high-speed storage, of which 9TB is available to end users as working storage.

These components are integrated into a multi-functional computing grid using xCAT and MOAB management software, and subsets of the system may be scheduled and used independently in a variety of configurations. All nodes run the latest version of the Red Hat Linux operating system, with each of the SMP systems running a single operating-system image. The total system offers a theoretical maximum computational capacity of 18.6 TFLOPS (trillion floating-point operations per second).

This computing environment is connected directly to a dedicated 44TB high-speed SAS storage array, one of two linked clusters that make up the 562TB BlueArc storage system. Connectivity is provided by redundant, dedicated 10Gbps links and by the 10Gbps backbones of the Mallinckrodt Co-operative Research Network and the Washington University network. This network provides high-speed connectivity to Washington University research laboratories, 10Gbps connectivity to both Internet and Internet2, and virtual private networks for secure access to information for research programs.
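
The quoted peak can be reproduced from the node counts and clock speeds listed below. The short Python sketch that follows is an illustrative back-of-the-envelope check, not an official benchmark: it assumes 4 double-precision floating-point operations per core per clock cycle (typical of SSE-capable Intel cores of this generation) and counts only the iDataPlex and SMP nodes.

  # Back-of-the-envelope check of the 18.6 TFLOPS theoretical peak.
  # Assumption: 4 double-precision FLOPs per core per clock cycle
  # (typical of SSE-capable Intel cores of this generation).
  FLOPS_PER_CYCLE = 4

  idataplex = 168 * 8 * 2.67e9 * FLOPS_PER_CYCLE   # 168 nodes x 8 cores @ 2.67 GHz
  smp = 7 * 64 * 2.40e9 * FLOPS_PER_CYCLE          # 7 nodes x 64 cores @ 2.40 GHz

  print("iDataPlex: %.2f TFLOPS" % (idataplex / 1e12))          # ~14.35
  print("SMP:       %.2f TFLOPS" % (smp / 1e12))                # ~4.30
  print("Combined:  %.2f TFLOPS" % ((idataplex + smp) / 1e12))  # ~18.65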

Login Nodes - The main systems that users interact with.

  • 2 x IBM x3650-m2 nodes
  • Each node has 8 cores @ 2.27 GHz
  • 48 GB RAM/node

iDataPlex Nodes - The workhorses of the cluster.

  • 168 x IBM dx360-m2 nodes
  • Each node has 8 cores @ 2.67 GHz
  • Hyper-threading is disabled
  • Turbo-mode is enabled
  • 48 GB RAM/node 

SMP Nodes - Useful for large-memory jobs or for threaded applications that can make use of more than 8 cores (see the sketch after this list).

  • 7 x IBM x3950-m2 nodes
  • Each node has 64 cores @ 2.40 GHz
  • 256 GB RAM/node
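
As a hypothetical illustration of the second use case, the sketch below uses only Python's standard multiprocessing module to spread independent work items across all of the cores of a single SMP node; the analyze function and its inputs are placeholders for real per-item work.

  # Hypothetical sketch: fan independent work items out across every core
  # of a single SMP node using only the Python standard library.
  from multiprocessing import Pool, cpu_count

  def analyze(item):
      # Placeholder for real per-item work (one file, one subject, one parameter set, ...).
      return item * item

  if __name__ == "__main__":
      work_items = range(1000)              # placeholder inputs
      pool = Pool(processes=cpu_count())    # cpu_count() reports 64 on an x3950-m2
      results = pool.map(analyze, work_items)
      pool.close()
      pool.join()
      print("processed %d items" % len(results))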

GPU Nodes - For applications that can utilize graphics processing units (see the sketch after this list).

  • 6 x Penguin Relion 2800GT nodes
  • Each node has 8 cores @ 3.30 GHz
  • 64 GB RAM/node
  • Each node has 4 NVIDIA Tesla K20 GPUs
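
The sketch below is a hypothetical example of dividing work across the four K20s in one of these nodes: it launches one worker process per GPU and pins each worker to a single device through the standard CUDA_VISIBLE_DEVICES environment variable. The program name ./my_cuda_app and its --chunk argument are placeholders for a real CUDA-enabled application.

  # Hypothetical sketch: one worker per Tesla K20, each pinned to its own GPU
  # via CUDA_VISIBLE_DEVICES so the workers do not contend for the same device.
  import os
  import subprocess

  NUM_GPUS = 4                  # each Penguin Relion 2800GT node has 4 K20s
  APP = "./my_cuda_app"         # placeholder CUDA-enabled application

  procs = []
  for gpu in range(NUM_GPUS):
      env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
      procs.append(subprocess.Popen([APP, "--chunk", str(gpu)], env=env))

  for p in procs:
      p.wait()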

InfiniBand Interconnect - A high-speed, low-latency network for the entire cluster.

  • QLogic SilverStorm 9240
  • 4X DDR InfiniBand
  • Supports up to 288 ports

GPFS Storage Cluster - A small, fast storage cluster.

  • 2 x IBM x3650-m2 storage nodes
  • 64 Fibre Channel hard drives
  • Exported to the cluster via NFS over IPoIB (IP over InfiniBand)

Gateway Nodes - The gateway nodes provide faster network access to external storage nodes.

BlueArc Scratch - Users who purchase or lease space on the Radiology BlueArc system are entitled to access 44TB of additional scratch space.

  • The CHPC has a dedicated Mercury head for the BlueArc-scratch array
  • SAS 15K RPM drives
  • 10Gbps network connection