NVIDIA H100 Tensor Core GPU
We are glad to announce that, with generous funding support from the department, SCRP now offers access to NVIDIA H100 datacenter GPUs.
For accounting reasons, H100 GPUs are also named a100 on the cluster.
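If you want to see which nodes in the a100 partition actually carry the h100 feature before submitting, you can list node features with sinfo. This is a minimal sketch using standard Slurm format options; the exact node names and feature labels it prints are cluster-specific:

# List node names, GPUs, and features for the a100 partition
sinfo -p a100 -o "%N %G %f"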
To specifically request H100 GPUs, use the --constraint option with the compute/srun/sbatch commands.

compute/srun examples:
# compute
compute --gpus-per-task=a100 --constraint=h100 -c 16 --mem=160G python my_script.py
# srun
srun -p a100 --gpus-per-task=1 --constraint=h100 -c 16 --mem=160G python my_script.py
sbatch example:
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH -p a100
#SBATCH --gpus-per-task=1
#SBATCH --constraint=h100
...
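Once a job is running, you can confirm that an H100 was actually allocated by printing the GPU model from within the allocation. A minimal sketch using nvidia-smi, with the same partition and constraint options as the examples above:

# Prints the model name of the allocated GPU; it should report an H100
srun -p a100 --gpus-per-task=1 --constraint=h100 nvidia-smi --query-gpu=name --format=csv,noheader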