The University of South Carolina (USC) High Performance Computing (HPC) clusters are available to researchers who require specialized hardware for computational research. The clusters are managed by Research Computing (RC) in the Division of Information Technology and are housed in the USC data center, which provides enterprise-level monitoring, cooling, backup power, and Internet2 connectivity.
Research Computing HPC clusters are accessed through SLURM job management partitions (queues) and are managed using the Bright Cluster Management system, which provides a robust software environment to deploy, monitor, and manage HPC clusters.
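In practice, access to a cluster means writing a SLURM batch script that names a partition and the resources the job needs, then submitting it with `sbatch`. The sketch below shows the general shape of such a script; the partition name, module, and script file are placeholders, not actual Research Computing partition names.

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=defq          # hypothetical partition (queue) name
#SBATCH --nodes=1                 # run on a single node
#SBATCH --ntasks=1                # one task
#SBATCH --time=01:00:00           # 1-hour wall-clock limit
#SBATCH --output=example_%j.out   # output file; %j expands to the job ID

# Load software and run it (module name is illustrative)
module load python3
python3 my_script.py
```

Submit with `sbatch example.sh`, monitor the job with `squeue -u $USER`, and list the partitions available on a given cluster with `sinfo`.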
Theia
Theia is a new AI-focused cluster built around an NVIDIA DGX node with 8x A100 GPUs.
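On a GPU cluster such as Theia, GPUs must be requested explicitly in addition to CPUs, typically through SLURM's generic-resource (`--gres`) syntax. A minimal sketch, with the partition name and GPU count as assumptions:

```bash
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --partition=gpu           # hypothetical GPU partition name
#SBATCH --gres=gpu:1              # request one GPU on the node
#SBATCH --time=04:00:00

# Print the GPU(s) allocated to this job
nvidia-smi
```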
Hyperion
Hyperion is our flagship cluster, intended for large parallel jobs. It consists of 356 compute, GPU, and Big Memory nodes providing 16,616 CPU cores. Compute and GPU nodes have 128-256 GB of RAM, and Big Memory nodes have 2 TB. All nodes have EDR Infiniband (100 Gb/s) interconnects and access to 1.4 PB of GPFS storage.
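Large parallel jobs of the kind Hyperion targets typically span several nodes, with one MPI rank per core and `srun` as the launcher. A sketch under those assumptions (the partition name and application binary are placeholders; 64 cores per node matches Hyperion's current compute nodes):

```bash
#!/bin/bash
#SBATCH --job-name=mpi_job
#SBATCH --partition=defq          # hypothetical partition name
#SBATCH --nodes=4                 # span four compute nodes
#SBATCH --ntasks-per-node=64      # one MPI rank per core on a 64-core node
#SBATCH --time=12:00:00

# srun launches the MPI ranks across all allocated nodes
srun ./my_mpi_app
```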
Bolden
Bolden is intended for teaching purposes only and consists of 20 nodes providing 400 CPU cores. All nodes have FDR Infiniband (54 Gb/s) interconnects and access to 300 TB of Lustre storage.
Maxwell
Maxwell, now retired, was available for teaching purposes only. It had 55 nodes with 2.4 GHz or 2.8 GHz CPUs, each with 24 GB of RAM.
| HPC Cluster | Theia | Hyperion Phase III | Hyperion Phase II | Hyperion Phase I | Bolden | Maxwell |
|---|---|---|---|---|---|---|
| Status | Active | Active | Retired | Retired | Teaching | Retired |
| Number of Nodes | | 356 | 407 | 224 | 20 | 55 |
| Total Cores | | 16,616 | 15,524 | 6,760 | 400 | 660 |
| Compute Nodes | | 295 | 346 | 208 | 18 | 40 |
| Compute Node Cores | | 64 | 48 | 28 | 20 | 12 |
| Compute Node CPU Speed | | 3.0 GHz | 3.0 GHz | 2.8 GHz | 2.8 GHz | 2.4 GHz or 2.8 GHz |
| Compute Node Memory | | 256 GB or 192 GB | 192 GB or 128 GB | 128 GB | 64 GB | 24 GB |
| GPU Nodes | 1 DGX (8x A100) | 9 Dual P100, 44 Dual V100 | 9 Dual P100, 44 Dual V100 | | 1 K20X | 15 M1060 |
| GPU Node Cores | | 48 or 28 | 48 or 28 | 28 | 20 | 12 |
| GPU Node CPU Speed | | 3.0 GHz | 3.0 GHz | 2.8 GHz | 2.8 GHz | 2.4 GHz or 2.8 GHz |
| GPU Node Memory | | 192 GB | 128 GB | 128 GB | 128 GB | 24 GB |
| Big Memory Nodes | | 8 | 8 | 8 | 1 | 0 |
| Big Memory Node Cores | | 64 | 40 | 40 | 20 | |
| Big Memory CPU Speed | | 3.0 GHz | 3.0 GHz | 2.1 GHz | 2.8 GHz | |
| Big Memory Node Memory | | 2.0 TB | 1.5 TB | 1.5 TB | 256 GB | |
| Home Storage | | 450 TB GPFS | 600 TB NFS | 300 TB Lustre | 50 TB NFS | 50 TB NFS |
| Home Storage Interconnect | | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet | 1 Gb/s Ethernet |
| Scratch Storage | | 1.4 PB | 1.4 PB | 1.5 PB | 300 TB | 20 TB |
| Scratch Storage Interconnect | | 100 Gb/s EDR Infiniband | 100 Gb/s EDR Infiniband | 100 Gb/s EDR Infiniband | 54 Gb/s FDR Infiniband | 40 Gb/s QDR Infiniband |