Galvani Slurm
Galvani can be accessed through a dedicated set of login nodes:
134.2.168.43
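A typical way to reach a login node is via SSH; in the sketch below, `<username>` is a placeholder for your cluster account name (an assumption, not part of the source):

```shell
# Connect to a Galvani login node via SSH.
# Replace <username> with your cluster account name.
ssh <username>@134.2.168.43
```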
Available Partitions on Galvani
Queues and limits are subject to change.
| Queue Name | Limits | Resources per node | Cost | Description |
|---|---|---|---|---|
| cpu-galvani | 3d | 32 CPUs, 1228 GB RAM | CPU=0.218, Mem=0.0126G | |
| a100-galvani | 3d | 32 cores, 1024 GB RAM, 8x A100 | CPU=1.406, Mem=0.1034G, gres/gpu=11.25 | GPU nodes with 8x A100 |
| 2080-galvani | 3d | 72 CPUs, 384 GB RAM, 8x 2080Ti | CPU=0.278, Mem=0.0522G, gres/gpu=2.5 | GPU nodes with 8x 2080Ti |
| 2080-preemptable-galvani | 3d | same as gpu-2080ti | CPU=0.0695, Mem=0.01305G, gres/gpu=0.625 | - |
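The Cost column lists per-unit billing weights (per CPU, per GB of memory, per GPU). Assuming these combine additively in the usual Slurm billing fashion, the billing rate of a hypothetical a100-galvani job requesting 8 CPUs, 64 GB RAM, and 1 GPU could be estimated as:

```shell
# Billing rate for a hypothetical a100-galvani job:
# 8 CPUs, 64 GB RAM, 1 GPU, using the weights from the table above.
awk 'BEGIN { printf "%.4f\n", 8*1.406 + 64*0.1034 + 1*11.25 }'
# prints 29.1156
```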
Terms
* MaxJobsPU: Maximum number of jobs each user is allowed to run at one time.
* MaxJobsPA: Maximum number of jobs each account/group is allowed to run at one time.
* MaxSubmitPU: Maximum number of jobs in a pending or running state at any time, per user.
* MaxSubmitPA: Maximum number of jobs in a pending or running state at any time, per account/group.
CUDA kernel compilation
Several development toolchains are installed on Galvani. To list them:
scl --list
which should output:
gcc-toolset-10
gcc-toolset-11
gcc-toolset-9
To enable one of them in your current shell (substitute the toolset version you need):
source scl_source enable <gcc-toolset-X>
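For example, to enable the GCC 11 toolchain from the list above (the `gcc --version` call is only a sanity check):

```shell
# Enable the GCC 11 Software Collection in the current shell,
# then verify that the newer compiler is now first on PATH.
source scl_source enable gcc-toolset-11
gcc --version
```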
CUDA on Compute Nodes
Several CUDA versions are installed on the compute nodes under /usr/local:
ls -1 /usr/local/cuda* -d
/usr/local/cuda-11
/usr/local/cuda-11.7
/usr/local/cuda-11.8
/usr/local/cuda-12
/usr/local/cuda-12.1
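To build against one specific CUDA version, you can point your environment at the matching directory. A sketch (the paths follow the listing above; `CUDA_HOME` is just a conventional variable name):

```shell
# Select CUDA 11.8 for compilation on a compute node.
export CUDA_HOME=/usr/local/cuda-11.8
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
nvcc --version   # sanity check: should report the selected version
```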
Submitting Jobs to Galvani Slurm
To understand how to submit jobs, please refer to the Slurm explanation.
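As a minimal sketch of a batch script for this cluster (the partition name and 3d limit come from the table above; the job name, resource amounts, and payload script are hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=example          # hypothetical job name
#SBATCH --partition=2080-galvani    # one of the partitions listed above
#SBATCH --gres=gpu:1                # request one GPU on the node
#SBATCH --cpus-per-task=8
#SBATCH --mem=48G
#SBATCH --time=1-00:00:00           # must stay within the 3d partition limit
#SBATCH --output=job-%j.out         # %j expands to the job ID

srun python train.py                # hypothetical payload
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.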