Resources
The University of South Carolina High Performance Computing (HPC) clusters are available
to researchers who require specialized hardware for computational research. The clusters
are managed by Research Computing (RC) in the Division of Information Technology.
High Performance Computing resources at the University of South Carolina include the
USC flagship Hyperion HPC cluster, which consists of 356 nodes providing a total of
16,616 CPU cores. The cluster has a heterogeneous configuration consisting of 291
compute nodes, 8 Big Memory nodes, 53 GPU nodes, a large SMP system, and 2 IBM Power8
quad-GPU servers. All nodes are connected by a high-speed, low-latency 100 Gb/s InfiniBand
network and have access to a 1.4 petabyte high-performance GPFS scratch filesystem and
450 terabytes of home directory storage served over 1 Gb/s Ethernet. Hyperion is housed
in the USC data center, which provides enterprise-level monitoring, cooling, backup power,
and Internet2 connectivity.
Research Computing clusters are accessed through job queues and are administered with the
Bright Cluster Manager, which provides a robust software environment to deploy, monitor,
and manage HPC clusters.
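The scheduler behind these job queues is not named on this page; purely as an illustration, the sketch below assumes a Slurm-based setup (a common pairing with Bright-managed clusters) and shows how a user might list the available queues and submit a batch job from Python. The partition name "defq" and the script name "my_job.sh" are hypothetical placeholders, not documented Hyperion queue names.

```python
import subprocess

def list_partitions() -> str:
    """List the cluster's job queues (Slurm calls these partitions)."""
    result = subprocess.run(["sinfo", "--summarize"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def submit_job(script_path: str, partition: str = "defq") -> str:
    """Submit a batch script to the given partition; returns e.g. 'Submitted batch job 12345'."""
    # Assumes Slurm's sbatch is on PATH; the partition name is a placeholder.
    result = subprocess.run(["sbatch", "--partition", partition, script_path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(list_partitions())
    print(submit_job("my_job.sh"))  # my_job.sh is a hypothetical batch script
```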
Hyperion
Hyperion is our flagship cluster intended for large, parallel jobs. It consists of
356 compute, GPU, and Big Memory nodes providing 16,616 CPU cores. Compute and GPU
nodes have 128-256 GB of RAM, and Big Memory nodes have 2 TB of RAM. All nodes have
EDR InfiniBand (100 Gb/s) interconnects and access to 1.4 PB of GPFS scratch storage.
Bolden
This cluster is intended for teaching purposes only and consists of 20 compute nodes
providing 400 CPU cores. All nodes have FDR InfiniBand (54 Gb/s) interconnects and
access to 300 TB of Lustre storage.
Maxwell (Retired)
This cluster was available for teaching purposes only. It consisted of about 55 compute
nodes with 2.4 GHz or 2.8 GHz CPUs and 24 GB of RAM each.
Historical Summary of HPC Clusters
| HPC Cluster | Hyperion Phase IV | Hyperion Phase III | Hyperion Phase II | Hyperion Phase I | Bolden | Maxwell |
|---|---|---|---|---|---|---|
| Status | Testing | Active | Retired | Retired | Teaching | Retired |
| Number of Nodes | | 356 | 407 | 224 | 20 | 55 |
| Total Cores | | 16,616 | 15,524 | 6,760 | 400 | 660 |
| Compute Nodes | | 295 | 346 | 208 | 18 | 40 |
| Compute Node Cores | | 64 | 48 | 28 | 20 | 12 |
| Compute Node CPU Speed | | 3.0 GHz | 3.0 GHz | 2.8 GHz | 2.8 GHz | 2.4 GHz or 2.8 GHz |
| Compute Node Memory | | 256 GB or 192 GB | 192 GB or 128 GB | 128 GB | 64 GB | 24 GB |
| GPU Nodes | | 1 DGX, (1) 8x A100, 44 Dual V100 | 9 Dual P100, 44 Dual V100 | 9 Dual P100 | 1 K20X | 15 M1060 |
| GPU Node Cores | | 48 or 28 | 48 or 28 | 28 | 20 | 12 |
| GPU Node CPU Speed | | 3.0 GHz | 3.0 GHz | 2.8 GHz | 2.8 GHz | 2.4 GHz or 2.8 GHz |
| GPU Node Memory | | 192 GB | 128 GB | 128 GB | 128 GB | 24 GB |
| Big Memory Nodes | | 8 | 8 | 8 | 1 | 0 |
| Big Memory Node Cores | | 64 | 40 | 40 | 20 | |
| Big Memory CPU Speed | | 3.0 GHz | 3.0 GHz | 2.1 GHz | 2.8 GHz | |
| Big Memory Node Memory | | 2.0 TB | 1.5 TB | 1.5 TB | 256 GB | |
| Home Storage | | 450 TB GPFS | 600 TB NFS | 300 TB Lustre, 50 TB NFS | 50 TB NFS | |
| Home Storage Interconnect | | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet |
| Scratch Storage | | 1.4 PB | 1.4 PB | 1.5 PB | 300 TB | 20 TB |
| Scratch Storage Interconnect | | 100 Gb/s EDR InfiniBand | 100 Gb/s EDR InfiniBand | 100 Gb/s EDR InfiniBand | 54 Gb/s FDR InfiniBand | 40 Gb/s QDR InfiniBand |
| Name | Number of Nodes | Cores per Node | Total Cores | Processor Speeds | Memory per Node | Disk Storage | GPU Nodes | Big Memory Nodes | Interconnect | Status |
|---|---|---|---|---|---|---|---|---|---|---|
| Hyperion Phase III | 356 | 64 or 48 (Compute); 48 or 28 (GPU); 64 (Big Memory) | 16,616 | 3.0 GHz | Compute (256 or 192 GB); GPU (192 or 128 GB); Big Memory (2.0 TB) | 450 TB Home (1 Gb/s Ethernet); 1.4 PB Scratch (100 Gb/s InfiniBand) | 9 (Dual P100); 44 (Dual V100) | 8 | EDR InfiniBand 100 Gb/s | Active |
| Hyperion Phase II | 407 | 48 or 28 (Compute); 48 or 28 (GPU); 40 (Big Memory) | 15,524 | 3.0 GHz | Compute (128 GB); GPU (128 GB); Big Memory (1.5 TB) | 450 TB Home (1 Gb/s Ethernet); 1.4 PB Scratch (100 Gb/s InfiniBand) | 9 (Dual P100); 44 (Dual V100) | 8 | EDR InfiniBand 100 Gb/s | Retired |
| Hyperion Phase I | 224 | 28 (Compute); 28 (GPU); 40 (Big Memory) | 6,760 | 2.8 GHz (Compute, GPU); 2.1 GHz (Big Memory) | Compute (128 GB); GPU (128 GB); Big Memory (1.5 TB) | 300 TB Lustre; 50 TB NFS; 1.5 PB Scratch (100 Gb/s InfiniBand) | 8 | 8 | EDR InfiniBand 100 Gb/s | Retired |
| Bolden | 20 | 20 | 400 | 2.8 GHz | 64 GB | 300 TB | 1 | 1 | FDR InfiniBand 54 Gb/s | Active |
| Maxwell | 55 | 12 | 660 | 2.4 GHz or 2.8 GHz | 24 GB | 20 TB | 15 (M1060) | None | QDR InfiniBand 40 Gb/s | Retired |