Ferranti System Architecture
This page describes the infrastructure that makes up the Ferranti Cluster.
Login Nodes
Ferranti will have two login nodes:
TBD
Feature | Specifications |
---|---|
CPUs: | 2x Intel Xeon Gold 6430 (32 cores each, 2.1 GHz) |
RAM: | 1024GB DDR5-4800 |
HCA: | 2x NVIDIA Mellanox ConnectX-7 NDR200 adapters |
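A few per-node totals follow directly from the login-node table above; this short sketch computes them (all input values are from the table, the derived figures are simple arithmetic):

```python
# Per-login-node totals derived from the table above.
sockets = 2
cores_per_socket = 32   # Xeon Gold 6430
ram_gb = 1024           # DDR5-4800

total_cores = sockets * cores_per_socket   # 64 cores per login node
ram_per_core_gb = ram_gb / total_cores     # 16 GB RAM per core

print(total_cores, ram_per_core_gb)
```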
Ferranti: Compute Infrastructure
Ferranti has 5 GPU compute nodes with the following specifications:
Feature | Specifications |
---|---|
Total Nodes: | 5 |
Accelerators: | 8x NVIDIA H100 (SXM5) per node |
FP32 Cores: | 16896 / card |
FP64 Cores: | 8448 / card |
NVIDIA Tensor Cores: | 528 / card |
GPU Memory: | 80GB HBM3 / card (bandwidth: 3.35TB/s) |
CPUs: | 2x Intel Xeon Platinum 8468 (48 cores each, 2.1 GHz) |
RAM: | 2048GB DDR5-4800 |
HCA: | 6x NVIDIA Mellanox ConnectX-7 NDR400 adapters |
Local Storage: | 96TB NVMe in RAID0 |
Theoretical Peak Performance: | -- |
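Some useful aggregate figures follow from the per-card and per-node values in the table above. This is a small sketch of that arithmetic; every input value is taken from the table, and the totals are derived:

```python
# Aggregate figures for the Ferranti GPU partition, computed from the
# per-card / per-node values in the table above.
nodes = 5
gpus_per_node = 8
hbm_per_gpu_gb = 80            # HBM3 per H100 card
hca_per_node = 6
hca_rate_gbps = 400            # NDR400 per adapter

total_gpus = nodes * gpus_per_node                  # 40 GPUs cluster-wide
total_hbm_gb = total_gpus * hbm_per_gpu_gb          # 3200 GB HBM3 cluster-wide
node_injection_gbps = hca_per_node * hca_rate_gbps  # 2400 Gb/s injection per node

print(total_gpus, total_hbm_gb, node_injection_gbps)
```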
Ferranti: Interconnect
Ferranti has a fat-tree interconnect topology with the following composition:
Feature | Specifications |
---|---|
Number of core switches: | 4 |
Number of edge switches: | 6 |
Interconnect topology and type: | NDR InfiniBand fat tree, non-blocking |
Blocking factor: | 1:1 |
Switch type: | NVIDIA NDR switches, 400 Gb/s bandwidth per port |
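The 1:1 blocking factor can be sanity-checked with a quick calculation. Note the per-switch port count is an assumption here (64-port NDR switches, as in NVIDIA's Quantum-2 generation, are typical); the table states only the switch counts and topology:

```python
# Sanity check of the 1:1 (non-blocking) blocking factor for a two-level
# fat tree. ports_per_switch is an assumption, not stated in the table.
ports_per_switch = 64   # assumed 64-port NDR switches
edge_switches = 6
core_switches = 4

downlinks_per_edge = ports_per_switch // 2   # ports toward compute/login nodes
uplinks_per_edge = ports_per_switch - downlinks_per_edge  # ports toward core

# Non-blocking (1:1) requires uplink capacity >= downlink capacity per edge.
assert uplinks_per_edge >= downlinks_per_edge

# The core layer must have enough ports to terminate all edge uplinks.
total_uplinks = edge_switches * uplinks_per_edge          # 192 uplinks
assert total_uplinks <= core_switches * ports_per_switch  # 192 <= 256

max_endpoints = edge_switches * downlinks_per_edge  # endpoint ports available
print(max_endpoints)
```

Under these assumptions the fabric supports up to 192 non-blocking endpoint ports, comfortably above what 5 compute nodes (6 HCAs each) and 2 login nodes (2 HCAs each) require.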