Leibniz
Leibniz was installed in the spring of 2017. It is an NEC system consisting of 152 nodes with dual 14-core Intel E5-2680v4 (Broadwell generation) CPUs, connected through an EDR InfiniBand network. The cluster also contains one node for visualisation, two nodes for GPU computing (NVIDIA Pascal generation) and one node with an Intel Xeon Phi expansion board.
- 2 login nodes, accessible via login-leibniz.hpc.uantwerpen.be
- 1 visualisation node with an NVIDIA P5000 GPU, accessible via viz1-leibniz.hpc.uantwerpen.be
- 152 compute nodes for a total of 4256 cores, 144 with 128 GB RAM and 8 with 256 GB RAM
- 2 GPU nodes, each with two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU (a device-query sketch follows this list)
- 1 node with an Intel Xeon Phi 7220P PCIe card with 16 GB RAM
- InfiniBand EDR interconnect
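
To make the GPU specification above concrete, here is a minimal sketch of a device query against the CUDA runtime API, assuming a CUDA toolkit is available on the GPU node (module names and compile commands vary per site). Run on a Leibniz GPU node it should list two Tesla P100 devices with roughly 16 GiB of memory each; compiling the file as `gpu_query.cu` with `nvcc` is one option (the file name is illustrative).

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Expected to print 2 on a Leibniz GPU node (two Tesla P100 cards).
    std::printf("CUDA devices: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // P100: compute capability 6.0, 56 SMs, ~16 GiB HBM2.
        std::printf("  device %d: %s, %.1f GiB, %d SMs, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.multiProcessorCount, prop.major, prop.minor);
    }
    return 0;
}
```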
Hopper
Hopper was installed in the spring of 2014. It is an HPE system consisting of 168 nodes with dual 10-core Intel E5-2680v2 (Ivy Bridge generation) CPUs, connected through an FDR10 InfiniBand network.
- 4 login nodes, accessible via login-hopper.hpc.uantwerpen.be
- 168 compute nodes for a total of 3360 cores, 144 with 64 GB RAM and 24 with 256 GB RAM (a node-level sanity check follows this list)
- 100 TB central GPFS storage (DDN SFA7700)
- InfiniBand FDR10 interconnect
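
The per-node figures for both clusters can be read back from the operating system as a sanity check. The sketch below is plain C++ with a Linux `sysconf` extension and carries two assumptions: `hardware_concurrency()` reports logical cores, so it matches the 20 physical cores of a Hopper node only if hyperthreading is disabled, and the reported memory will fall slightly short of the nominal 64 GB or 256 GB because the kernel reserves part of it.

```cpp
#include <cstdio>
#include <thread>
#include <unistd.h>

int main() {
    // Logical cores visible to the OS; 20 on a Hopper node
    // (dual 10-core Ivy Bridge) if hyperthreading is disabled.
    unsigned cores = std::thread::hardware_concurrency();

    // Physical memory = page count x page size.
    // _SC_PHYS_PAGES is a common Linux/glibc sysconf extension.
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    double gib = static_cast<double>(pages) * page_size / (1024.0 * 1024.0 * 1024.0);

    std::printf("logical cores: %u\n", cores);
    std::printf("physical memory: %.1f GiB\n", gib); // ~64 or ~256 GiB nominal
    return 0;
}
```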