The latest Rocky Linux 8 release, version 8.10, came out on May 30, 2024.
Since then we have been preparing an upgrade of the CalcUA clusters to Rocky Linux 9.
Here we describe what will change and how this might affect you.

For the impatient: there is a free test period in July and August (see below)!

Upgrade to Rocky Linux 9.6

The latest Rocky Linux 9 release, version 9.6, came out on June 4, 2025.
This brings along new versions of the kernel and driver packages, as well as of most system-installed user packages.

Here's a quick overview of the main packages involved:

Packages   Rocky Linux 8                   Rocky Linux 9
kernel     4.18.0-513.24.1                 5.14.0-503.40.1
cgroups    v1                              v2
ofed       5.8-4.1.5.0                     25.04-0.6.1
slurm      25.05.0 (previously 23.11.10)   25.05.0
pmix       4.2.9                           4.2.9
bash       4.4.20-5                        5.1.8-9
gcc        8.5.0-26                        11.5.0-5
glibc      2.28-251                        2.34-168
python     3.6.8-39                        3.9.21-2
perl       5.26.3-422                      5.32.1-481

For a more complete overview of the changes, please check the Release notes for Rocky Linux 9.6 or the (upstream) Release notes for Red Hat Enterprise Linux 9.6.

Users whose software was compiled with, or runs against, system-installed packages will have to check that their code still runs! This may require recompiling your code and/or recreating your (virtual) environments!
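
As a minimal sketch of such a check (my_program is a placeholder for your own binary), you could verify which system libraries an existing executable links against and whether it still starts on an upgraded node:

ldd ./my_program     # list the shared libraries the binary links against
ldd --version        # show the glibc version on the current node
./my_program --help  # quick smoke test

If you use Python virtual environments built on top of a system or module Python, recreating them on a Rocky Linux 9 node is usually the safest option, for example (assuming you have kept a requirements.txt):

python3 -m venv myenv-rocky9
source myenv-rocky9/bin/activate
pip install -r requirements.txt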

Slurm updated to version 25.05

Previously, we were running Slurm version 23.11.10 (released on August 27, 2024) on our clusters.
The latest version of Slurm is 25.05.0, which was released on May 29, 2025.

During the last couple of weeks, all our clusters have already been updated to Slurm 25.05.
This should not have affected your workflows — but if it does, please let us know (see below).
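
If you want to check which Slurm version a node is running, every Slurm command reports it via the -V flag, for example:

sinfo -V

which should print slurm 25.05.0 on an updated node.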

For more information about this version, please check the Release Notes for Slurm 25.05 (highlights) or the CHANGELOG (for a more extensive list of changes). Note that the official Slurm Documentation has also been updated to the 25.05 release.

Module toolchains starting from 2023a

Starting with Rocky Linux 9, we will only be offering toolchain versions 2023a and above (as well as the usual system and x86_64 toolchains).

Here's a short overview of some of the main modules and their versions:

Module                   2023a      2024a       2025a (upcoming)
GCC                      12.3.0     13.3.0      14.2.0
OpenMPI                  4.1.5      5.0.3       5.0.7
PMIx                     4.2.4      5.0.2       5.0.6
intel-compilers / imkl   2023.1.0   2024.2.0    2025.1.1
iimpi                    2021.9.0   2021.13.0   2021.15.0
Python                   3.11.3     3.12.3      3.13.1
Perl                     5.36.1     5.38.2      5.40.0
Cuda                     12.1.1     12.6.0      12.8.0

Since older toolchains will not be available anymore, users who are using modules from 2020a, 2021a and/or 2022a will have to switch to modules from 2023a or above! This also includes checking your code and updating your job scripts, as sketched below.
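
As an illustration, switching a 2022a-based job script to 2023a usually amounts to loading the newer module versions; the module name below is only a hypothetical example of the 2023a naming, so use module spider to check what is actually available:

module purge
module spider Python                        # look up the available Python versions
module load Python/3.11.3-GCCcore-12.3.0   # hypothetical 2023a-style module name
python --version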

We have recompiled almost all modules from 2023a and above, and we have also tried to include newer versions of modules that existed in 2022a or earlier but for which no newer version exists yet in 2023a or above. Nonetheless, it could be that a module you still want to use is no longer available; if that is the case, please let us know (see below).

Note that, from now on, only requests for software that can be installed under toolchains 2023a or above will be granted. Also, compatibility of software with the Rocky Linux 9 operating system will be a hard requirement.

Free test period in July and August

To help you get used to the Rocky Linux 9 upgrade (including all related changes), and to make sure that you can adapt and test your code (so that it keeps working once we have fully upgraded), we are offering a free test period in July and August (see timeline below).

For this purpose, we have created a reservation named rocky9, which contains compute nodes that have already been upgraded to Rocky Linux 9. Use the rocky9 reservation to test your code by submitting your jobs with:

sbatch --reservation=rocky9

Note that this can also be used with srun and/or salloc to start an interactive job.
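
For example, a minimal test job script could look as follows (the job name, resources and time limit are placeholders; adapt them to your own workload):

#!/bin/bash
#SBATCH --job-name=rocky9-test
#SBATCH --reservation=rocky9
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
cat /etc/os-release   # should report Rocky Linux 9 on an upgraded node

And to start an interactive shell on a Rocky Linux 9 node:

srun --reservation=rocky9 --pty bash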

For our Breniac cluster, there's also a dedicated Rocky Linux 9 login node, which you can reach via

ssh vscxxxx@login9.hpc.uantwerpen.be

Note that you can submit jobs from this login node to any of the partitions by using the --partition option, along with the --reservation=rocky9 option from above.
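
For example (skylake is just one of the partitions from the table below; jobscript.sh is a placeholder for your own script):

sbatch --partition=skylake --reservation=rocky9 jobscript.sh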

Migration timeline

We have planned a gradual migration of our cluster compute and login nodes during the summer holidays. This will hopefully give you plenty of time to make your software and/or workflow compliant with our new setup.

The migration timeline includes:

  • July and August 2025: free test period (see above)
  • August 2025: migrate more compute nodes (on a weekly basis)
  • Early to mid-September 2025: finalize migration

The migration should be fully completed before the beginning of the new academic year (2025-2026).

Here's a list of compute (and login) nodes that have already been upgraded to Rocky Linux 9
(this list will be updated if more nodes are added — last update: August 19): 

Cluster           Partition      Nodes (upgraded / total)   NodeList (compute/login)
Vaughan           zen2           80 / 152    r[1,2]c[01-06]cn[1-4].vaughan, r3c[01-08]cn[1-4].vaughan
                  zen3           12 / 24     r6c[04-06]cn[1-4].vaughan
                  zen3_512       8 / 16      r6c[09-10]cn[1-4].vaughan
                  ampere_gpu     0 / 1       -
                  arcturus_gpu   1 / 2       amdarc2.vaughan (not yet available)
Vaughan (login)   -              0 / 2       login2-vaughan.hpc.uantwerpen.be (login currently disabled)
Leibniz           broadwell      96 / 128    r1c[01-07,09-11]cn[1-4].leibniz, r2c[01-11]cn[1-4].leibniz, r3c[03-04,06]cn[1-4].leibniz
                  broadwell_256  8 / 8       r0c[01-02]cn[1-4].leibniz
                  pascal_gpu     2 / 2       nvpa[1-2].leibniz
Leibniz (login)   -              1 / 2       login2-leibniz.hpc.uantwerpen.be
Breniac           skylake        11 / 23     r1c01cn[2-4].breniac, r1c[02,06]cn[1-4].breniac
Breniac (login)   -              1 / 2       login9.hpc.uantwerpen.be

Support questions (related to the upgrade)

For questions and problems related to the upgrade of and the migration to Rocky Linux 9, please contact us via e-mail: hpc@uantwerpen.be.

Please begin your subject with "rocky9: " and include enough information to help us help you (e.g., software and toolchain used, work directory, job id, output/error log files, …).

We hope you appreciate our efforts, and we thank you for your help in making the upgrade work!

— The CalcUA team