Center for High Performance Computing (CHPC)

In May 2015, CHPC 2.0 went live. Our second-generation cluster was designed in collaboration with our chosen vendor, Advanced Clustering Technologies, to meet our current and future computing needs, with an emphasis on integrating General-Purpose Graphics Processing Units (GPGPUs).

Networking and Storage

The internal networking is based on 4 quad data rate (QDR) InfiniBand switches in a leaf/spine configuration. We have 150TB of integrated Lustre-based storage capable of sustained I/O throughput in excess of 1GB/s.

Management and Compute Nodes

The management node is an ACT Pinnacle 2X3601H8 system with 2 Intel 8-core Xeon E5-2630v3 CPUs and 64GB of DDR4 memory.

There are 23 ACT Pinnacle 1FX3601 systems that function as our compute nodes. Each compute node has 2 Intel 8-core Xeon E5-2630v3 CPUs and 128GB of DDR4 (2133MHz) memory.

We have 21 ACT Pinnacle 2XG3601H8 systems that function as both GPGPU and compute nodes, depending on the load of the system. Each GPGPU node has 2 Intel 8-core Xeon E5-2630v3 CPUs and 128GB of DDR4 (2133MHz) memory, as well as 2 NVIDIA Tesla K20X GPGPUs.

We have 2 specialty compute nodes: 1 large-memory/large-core ACT Pinnacle 2X4601H8x4 system with 4 Intel 8-core Xeon E5-4610v2 CPUs and 1TB of DDR3 (1600MHz) memory, and 1 large GPU node, an ACT Custom 4U Pinnacle system with 2 Intel 8-core Xeon E5-2640v2 CPUs, 128GB of DDR3 (1866MHz) memory, and 8 NVIDIA Tesla K20X GPGPUs. These last two systems are designed to meet the needs of specific applications run on the cluster.

In addition to the compute/GPGPU nodes, there are 2 login nodes based on ACT Pinnacle 2X3601H8 systems, each with 2 Intel 8-core Xeon E5-2630v3 CPUs and 64GB of DDR4 (2133MHz) memory. The login nodes have additional dual-port 10Gb/s network interface cards to facilitate moving data onto and off of the cluster.

Hosting and Capabilities

Network connectivity is provided by the Washington University Research Network (WURN). The login nodes, as well as the Data Transfer Node (DTN), are each connected via 10Gb/s links, providing a high-speed, non-firewalled connection to both the Internet and Internet2.

The total system offers a theoretical maximum single-precision computation capacity of 394 TFLOPS (trillion floating-point operations per second) and a theoretical maximum double-precision capacity of 192 TFLOPS.
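
For context, a theoretical peak of this kind is typically derived by multiplying each device's published per-device peak by its count and summing over the inventory. The short Python sketch below illustrates that aggregation only; the per-device figures in it are assumptions drawn from vendor spec sheets rather than from this page, and the totals it prints depend on which nodes and which per-device peaks are counted, so it is an illustration of the method rather than a reproduction of the quoted numbers.

# Illustrative aggregation of theoretical peak FLOPS: count x per-device peak,
# summed over the node inventory. The per-device peaks are assumed values from
# vendor spec sheets (not from this page), so the totals are illustrative only.
devices = [
    # (description, count, single-precision TFLOPS each, double-precision TFLOPS each)
    ("Xeon E5-2630v3 (compute + GPGPU nodes)", (23 + 21) * 2, 0.614, 0.307),
    ("Xeon E5-4610v2 (large-memory node)",     4,             0.294, 0.147),
    ("Xeon E5-2640v2 (large GPU node)",        2,             0.256, 0.128),
    ("NVIDIA Tesla K20X",                      21 * 2 + 8,    3.95,  1.31),
]

sp_total = sum(count * sp for _, count, sp, _ in devices)
dp_total = sum(count * dp for _, count, _, dp in devices)
print(f"Single precision: {sp_total:.0f} TFLOPS, double precision: {dp_total:.0f} TFLOPS")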
