BIH HPC Research Cluster
Compute Nodes
- 230 general purpose nodes (different generations)
  - 32–96 CPU cores
  - 128–384 GB RAM
- 5 dedicated high-memory nodes
  - 80–96 CPU cores
  - 750–4096 GB RAM
- 11 dedicated GPU nodes
  - 7 × 4 NVIDIA V100 cards
  - 1 × 10 NVIDIA A40 cards
  - 3 × 4 NVIDIA L40 cards
File Storage
- 2 PB GPFS/Spectrum Scale storage (legacy; being phased out)
  - DDN hardware with native client access; 16 × 10 Gb/s Ethernet
- 1.7 PB Tier 1 Ceph Cluster
  - NVMe SSD hot storage
  - Fast file access
  - High availability
- 7.4 PB Tier 2 Ceph Cluster (Main)
  - HDD-based warm storage
  - Highly scalable
  - CephFS file storage & S3 object storage (see the S3 access sketch after this list)
- 6.7 PB Tier 2 Ceph Cluster (Mirror)
  - HDD-based mirror storage
  - Located in separate fire compartment
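The Tier 2 object storage can be reached with any S3-compatible client. Below is a minimal Python sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders rather than actual cluster values, which are issued per project.

```python
# Minimal sketch of S3 object storage access with boto3.
# Endpoint URL, bucket name, and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-cluster.org",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a result file, then list the bucket contents.
s3.upload_file("results.tar.gz", "my-project-bucket", "results/results.tar.gz")
for obj in s3.list_objects_v2(Bucket="my-project-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```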
OpenStack Service Cloud
- Ironic bare-metal deployment of compute nodes
- Virtual hosts for various infrastructure services
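For illustration only, a hedged openstacksdk sketch of how an operator might inspect this layer; the cloud profile name is hypothetical, and end users do not interact with the service cloud directly.

```python
# Sketch of querying the service cloud with openstacksdk (operator view).
# The clouds.yaml profile name "service-cloud" is an assumption.
import openstack

conn = openstack.connect(cloud="service-cloud")

# List registered bare-metal (Ironic) nodes and their provisioning state.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state)

# List virtual machines hosting infrastructure services.
for server in conn.compute.servers():
    print(server.name, server.status)
```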
Misc
- Slurm-based resource scheduling (see the job submission sketch below)
- User authentication via Charité and MDC Active Directory
- Secure file exchange services for internal and external use
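As an illustration of the Slurm setup, the sketch below assembles a small batch script and submits it with sbatch from Python. The partition and GPU GRES names are assumptions and must be adapted to the cluster's actual configuration.

```python
# Minimal sketch of submitting a Slurm job from Python via sbatch.
# Partition name "gpu" and the GRES spec are placeholders.
import subprocess

batch_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --gres=gpu:1             # request one GPU card

nvidia-smi
"""

# sbatch reads the job script from stdin and prints the assigned job ID.
result = subprocess.run(
    ["sbatch"], input=batch_script, capture_output=True, text=True, check=True
)
print(result.stdout.strip())  # e.g. "Submitted batch job 123456"
```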