The CalcUA infrastructure currently consists of the clusters Leibniz and Vaughan (and BrENIAC), providing a total of 367 compute nodes that together deliver more than 675 teraflops of computing power.
Vaughan was installed in the summers of 2020 and 2021. It is a NEC system consisting of 152 nodes with two 32-core AMD EPYC 7452 Zen2-generation CPUs, connected through an HDR100 InfiniBand network. This cluster also contains 3 nodes for GPU computing (NVIDIA Ampere and AMD MI100).
In the spring of 2023, the system was extended with 40 nodes with two 32-core AMD EPYC 7543 Zen3-generation CPUs.
- 2 login nodes, accessible via login-vaughan.hpc.uantwerpen.be
- 152 compute nodes for a total of 9728 cores, with 256 GB RAM each
- 40 compute nodes for a total of 2560 cores, 24 with 256 GB RAM and 16 with 512 GB RAM
- 1 GPU node with four NVIDIA Ampere A100 GPUs with 40 GB HBM2e
- 2 GPU nodes with two AMD Instinct (Arcturus) MI100 GPUs
- InfiniBand HDR100 interconnect
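Access to the login nodes is over SSH. A minimal connection command is sketched below; the account name `vsc20xxx` is a placeholder for your own VSC account id, not a real account:

```shell
# Connect to a Vaughan login node over SSH.
# "vsc20xxx" is a placeholder; substitute your own VSC account name.
ssh vsc20xxx@login-vaughan.hpc.uantwerpen.be
```

The `login-vaughan` alias distributes sessions over the two login nodes; the same pattern applies to the other clusters with their respective login addresses.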
Leibniz was installed in the spring of 2017. It is a NEC system consisting of 152 nodes with dual 14-core Intel Xeon E5-2680v4 Broadwell-generation CPUs, connected through an EDR InfiniBand network. This cluster also contains a node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal architecture), one node with dual NEC SX-Aurora TSUBASA vector processors, and one node with an Intel Xeon Phi expansion board.
- 2 login nodes, accessible via login-leibniz.hpc.uantwerpen.be
- 1 visualisation node with an NVIDIA P5000 GPU, accessible via viz1-leibniz.hpc.uantwerpen.be
- 152 compute nodes for a total of 4256 cores, 144 with 128 GB RAM and 8 with 256 GB RAM
- 2 GPU nodes with two NVIDIA Pascal P100 GPUs with 16 GB HBM2
- 1 node with an Intel Xeon Phi 7220P PCIe card with 16 GB RAM
- 1 node with a NEC SX-Aurora TSUBASA model A300-2
- InfiniBand EDR interconnect
BrENIAC was a former Tier-1 machine located at KU Leuven that was in operation from 2016 until it was decommissioned in December 2022. It was a NEC system that initially consisted of 580 nodes with Broadwell-generation CPUs and was later extended with 408 nodes with dual 14-core Intel Xeon Gold 6132 Skylake-generation CPUs, connected through an EDR InfiniBand network. 24 of the Skylake nodes were recovered for further use (replacing the even older Hopper nodes).
- 1 login node, accessible via login-breniac.hpc.uantwerpen.be
- 23 compute nodes for a total of 644 cores, with 192 GB RAM each
Hopper was in operation from late 2014 until the summer of 2020. It was an HPE system consisting of 168 nodes with two 10-core Intel Xeon E5-2680v2 Ivy Bridge-generation CPUs, connected through an FDR10 InfiniBand network. This cluster was moved out in the summer of 2020 to make room for the installation of Vaughan, but 24 nodes with 256 GB RAM were recovered for further use.
Hopper was finally decommissioned in the summer of 2023.