Cascade Lake (CLX) Compute Nodes
Frontera hosts 8,008 Cascade Lake (CLX) compute nodes contained in 91 racks.
Table 4. CLX Specifications
| Specification | Value |
|---|---|
| Model | Intel Xeon Platinum 8280 ("Cascade Lake") |
| Total cores per CLX node | 56 cores on two sockets (28 cores/socket) |
| Hardware threads per core | 2 (Hyperthreading is not currently enabled on Frontera) |
| Hardware threads per node | 56 x 2 = 112 |
| RAM | 192 GB (2933 MT/s) DDR4 |
| Cache | 32 KB L1 data cache per core; 1 MB L2 per core; 38.5 MB L3 per socket. Each socket can cache up to 66.5 MB (sum of L2 and L3 capacity). |
| Local storage | 144 GB /tmp partition on a 240 GB SSD |
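The per-node figures in Table 4 follow from simple arithmetic; a minimal sketch using only the constants from the table above:

```python
# Per-socket and per-node CLX figures from Table 4.
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 28
THREADS_PER_CORE = 2          # hyperthreading not currently enabled

cores_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET       # 56
hw_threads_per_node = cores_per_node * THREADS_PER_CORE    # 56 x 2 = 112

# Cache reachable from one socket: 28 x 1 MB private L2 plus 38.5 MB shared L3.
L2_MB_PER_CORE = 1.0
L3_MB_PER_SOCKET = 38.5
cacheable_mb_per_socket = CORES_PER_SOCKET * L2_MB_PER_CORE + L3_MB_PER_SOCKET

print(cores_per_node, hw_threads_per_node, cacheable_mb_per_socket)  # 56 112 66.5
```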
Frontera's four login nodes are Intel Xeon Platinum 8280 ("Cascade Lake") nodes with 56 cores and 192 GB of RAM. The login nodes are configured similarly to the compute nodes; however, since these nodes are shared, limits are enforced on memory usage and the number of processes. Please use the login nodes only for file management, compilation, and data movement. Any computing should be done within a batch job or an interactive session on the compute nodes.
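As a sketch of what a batch job looks like, a minimal Slurm script requesting a single CLX node might resemble the following. The job name, queue name, time limit, and executable are illustrative placeholders, not prescriptions; consult the production queue documentation for actual queue names and limits.

```shell
#!/bin/bash
#SBATCH -J myjob            # job name (placeholder)
#SBATCH -p normal           # queue/partition (assumed; check available queues)
#SBATCH -N 1                # one CLX node
#SBATCH -n 56               # 56 MPI tasks, one per core
#SBATCH -t 01:00:00         # wall-clock limit (placeholder)

ibrun ./my_executable       # ibrun is TACC's MPI launcher; binary is a placeholder
```

Submit the script with `sbatch`; for interactive work on a compute node, TACC's `idev` utility starts an interactive session instead.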
The interconnect is based on Mellanox HDR technology with full HDR (200 Gb/s) connectivity between the switches and HDR100 (100 Gb/s) connectivity to the compute nodes. A fat-tree topology employing six core switches connects the compute nodes and the
$SCRATCH filesystems. There are two 40-port leaf switches in each rack. Half of the nodes in a rack (44) connect to one leaf switch: their HDR100 (100 Gb/s) links pair up into 22 HDR (200 Gb/s) downlink ports, and the remaining 18 ports serve as uplinks to the six core switches. The disparity between the number of downlinks and uplinks creates an oversubscription of 22/18.
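The oversubscription figure can be checked directly from the port counts above; a small sketch:

```python
from fractions import Fraction

LEAF_PORTS = 40               # 40-port leaf switch
NODES_PER_LEAF = 44           # half of a rack's nodes

# Two HDR100 node links pair into one HDR (200 Gb/s) leaf port.
downlink_ports = NODES_PER_LEAF // 2          # 22 downlinks
uplink_ports = LEAF_PORTS - downlink_ports    # 18 uplinks to the core switches

oversubscription = Fraction(downlink_ports, uplink_ports)
print(oversubscription)       # 11/9, i.e. 22/18
```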