CalcUA infrastructure

The CalcUA infrastructure currently consists of two clusters, Hopper and Leibniz, providing a total of 320 compute nodes that deliver more than 240 teraflops of computing power.

More details are available on the VSC website.

Cluster hardware


Leibniz was installed in the spring of 2017. It is a NEC system consisting of 152 nodes, each with two 14-core Intel E5-2680v4 (Broadwell generation) CPUs, connected through an EDR InfiniBand network. This cluster also contains a node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal generation) and one node with an Intel Xeon Phi expansion board.

  • 2 login nodes, accessible via
  • 1 visualization node with a NVIDIA P5000 GPU,
    accessible via
  • 152 compute nodes for a total of 4256 cores,
    144 with 128 GB RAM and 8 with 256 GB RAM
  • 2 GPU nodes with two NVIDIA Tesla P100 GPUs
    with 16 GB HBM2 memory per GPU
  • 1 node with an Intel Xeon Phi 7220P PCIe card with 16 GB RAM
  • InfiniBand EDR interconnect


Hopper was installed in the spring of 2014. It is an HPE system consisting of 168 nodes, each with two 10-core Intel E5-2680v2 (Ivy Bridge generation) CPUs, connected through an FDR10 InfiniBand network.

  • 4 login nodes, accessible via
  • 168 compute nodes for a total of 3360 cores,
    144 with 64 GB RAM and 24 with 256 GB RAM
  • 100 TB central GPFS storage (DDN SFA7700)
  • InfiniBand FDR10 interconnect

Cluster software

User software

  • Applications : ABINIT, CP2K, Gaussian, Gromacs, NWChem, Quantum Espresso, R, Siesta, ...
  • Compilers : GCC, Intel Cluster Studio
  • Libraries : Intel MKL, OpenBLAS, FFTW, HDF5, OpenMPI, ...
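On VSC clusters such as these, installed applications, compilers and libraries are typically accessed through environment modules. The commands below are a minimal sketch of that workflow; the exact module names and versions available on Hopper and Leibniz are assumptions and should be checked with `module avail`.

```shell
# List the software modules installed on the cluster.
module avail

# Load a compiler toolchain and libraries before building.
# Module names/versions below are assumptions -- adapt to what
# "module avail" actually shows on the login node.
module load GCC
module load OpenMPI FFTW

# Compile an MPI program against FFTW (hypothetical source file).
mpicc -O2 -o mysim mysim.c -lfftw3
```

Loaded modules adjust `PATH`, `LD_LIBRARY_PATH` and similar variables for the current shell session only; `module list` shows what is currently loaded and `module purge` resets the environment.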

System software

  • Operating system : CentOS 7.X
  • Scheduling subsystem : Torque and MOAB
  • Monitoring subsystem : Ganglia, Nagios and CMU