There is a VSC Wiki with basic information about the hardware, storage, and submission scripts:
https://wiki.vsc.ac.at/doku.php
Information on how to list and load the available software can also be found there.
To get an estimate of when a queued job will start running, use the command:
squeue -j %JOBID% --start
with %JOBID% replaced by the job ID of your job.
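For example, for a (made-up) job ID 1234567 the call would be:
squeue -j 1234567 --start
The estimated start time is reported in the START_TIME column; it may show N/A if the scheduler has not yet computed an estimate.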
To be able to run jobs that occupy only part of a node, make sure you do not specify the -N 1 parameter, since that automatically allocates the entire node. Use the --ntasks option to specify the number of CPU cores instead.
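A minimal sketch of such a partial-node request (the job name and program are placeholders, and the QOS/partition values are only copied from the full example further below; they may need to differ for shared-node jobs, so check the VSC Wiki):
#!/bin/bash
#SBATCH -J small_test            # job name (placeholder)
#SBATCH --ntasks=16              # 16 CPU cores; no -N 1, so no full-node allocation
#SBATCH --qos=zen3_0512          # adjust QOS/partition to your project
#SBATCH --partition=zen3_0512

mpirun -np $SLURM_NTASKS ./my_program    # my_program is a placeholder executable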
To load the Intel 2021 compiler, use:
module load compiler/latest
To check your quota (by default one has 100 GB per project):
mmlsquota --block-size auto -j home_fs70XXX home
w2dynamics can be loaded via:
module load --auto w2dynamics/master-gcc-12.2.0-ez4ak3p
Access to the VASP software is restricted: one needs to write to VSC support to be granted access. To load VASP, use:
module load --auto vasp6/6.2.0-intel-19.1.3.304-tybhikr
In addition to the usual
export OMP_NUM_THREADS=1
which sets the number of OpenMP threads to 1, it might be necessary to set correct process pinning via
export I_MPI_PIN_RESPECT_CPUSET=0
otherwise MPI jobs will run horrendously slowly. However, as of the 7th of February 2023 this variable is set to zero automatically by the module load command.
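A sketch of the corresponding part of a VASP job script, with both variables set explicitly (the mpirun invocation and the vasp_std executable name are assumptions; adapt them to how VASP is actually started in your setup):
module purge
module load --auto vasp6/6.2.0-intel-19.1.3.304-tybhikr

export OMP_NUM_THREADS=1             # one OpenMP thread per MPI rank
export I_MPI_PIN_RESPECT_CPUSET=0    # ensure correct Intel MPI process pinning (see note above)

mpirun -np $SLURM_NTASKS vasp_std > vasp-$SLURM_JOB_ID.log 2>&1    # vasp_std: assumed executable name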
In the zen3 environment, w2dynamics can be loaded via:
module load --auto w2dynamics/master-gcc-12.2.0-p2uhfc4
There is an issue with UCX on the login nodes producing the error
UCX WARN UCP version is incompatible, required: 1.12, actual: 1.10
which should be resolved in the near future. Note that the compute nodes are supposed to have the correct version (source: Jan Zabloudil). For now, if you need to run on a login node, use:
module load ucx/1.12.1-gcc-11.2.0-udqocr2
An example w2dynamics submission script can look as follows (remember to change the email address to yours, put the correct name of your parameters file in place of Parameters.in, and adjust the number of nodes as needed; for a two-particle calculation -N 10 or more is probably needed):
#!/bin/bash
#SBATCH -N 1
#SBATCH -J square_dmft
#SBATCH --ntasks-per-node=128
#SBATCH --qos=zen3_0512
#SBATCH --partition=zen3_0512
##SBATCH --time=24:00:00
#SBATCH --mail-type=ALL                        # first have to state the type of event to occur
#SBATCH --mail-user=<YOUR.NAME@tuwien.ac.at>   # and then your email address

module purge
module load --auto w2dynamics/master-gcc-11.2.0-mqjex7z

mpirun -np $SLURM_NTASKS DMFT.py Parameters.in > ctqmc-$SLURM_JOB_ID.log 2>&1
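Assuming the script is saved as, e.g., submit_dmft.sh (the name is arbitrary), it is submitted with
sbatch submit_dmft.sh
and the DMFT.py output ends up in ctqmc-<jobid>.log in the submission directory.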
To be able to run wien2k in parallel mode one needs to set up passwordless login to the compute nodes:
1) Generate an ssh key via
ssh-keygen
(press Enter to use an empty passphrase).
2) Authorize it:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Create/modify the .ssh/config file so that it contains the following lines:
Host n*
    IdentityFile ~/.ssh/id_rsa
Host localhost
    IdentityFile ~/.ssh/id_rsa
4) Change the permissions:
chmod 600 ~/.ssh/*
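Once this is done, the passwordless login can be checked from within a job allocation with something like the following (n4001-001 is just a hypothetical node name; use a node from your own allocation):
ssh n4001-001 hostname
If the node's hostname is printed without a password prompt, the setup works.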
To be able to run wien2k on multiple nodes one also needs to set up a .machines file.
Here is an example utilizing 4 OpenMP threads and running 32 k-point processes per node. The list of nodes allocated by slurm is obtained from the $SLURM_NODELIST variable:
export OMP_NUM_THREADS=4
nodelist=$(scontrol show hostname $SLURM_NODELIST)
> .machines
for node in ${nodelist[@]}; do
    for (( j=1; j<=32; j++ )); do
        echo 1:$node >> .machines
    done
done
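For illustration, with two allocated nodes named, say, n4001-001 and n4001-002 (hypothetical names), the loop above produces a .machines file containing 32 lines of the form
1:n4001-001
followed by 32 lines of the form
1:n4001-002
i.e. one entry per k-point process. The parallel wien2k cycle is then typically started with run_lapw -p.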