NVIDIA Hopper Tensor Core GPU
We are pleased to announce that, with generous funding support from the department, SCRP now offers access to NVIDIA Hopper datacenter GPUs.
For accounting reasons, Hopper GPUs are also named a100 on the cluster.
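To see which GPU generations are available in the a100 partition, you can list node features with sinfo. This is a minimal sketch using standard Slurm format specifiers; the exact feature names shown in the output depend on the cluster's configuration:

# List partition name, node features, and GPU resources for the a100 partition
sinfo -p a100 -o "%P %f %G"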
To specifically request Hopper GPUs, use the --constraint option with the compute, srun, or sbatch commands.

compute/srun examples:
# compute
compute --gpus-per-task=a100:1 --constraint=hopper -c 16 --mem=160G python my_script.py
# srun
srun -p a100 --gpus-per-task=1 --constraint=hopper -c 16 --mem=160G python my_script.py
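To double-check that a job actually landed on a Hopper GPU, you can run nvidia-smi inside the allocation; H100 devices should appear in the listing. A minimal sketch using the same request options as above:

# Request a Hopper GPU and list the devices visible to the job
srun -p a100 --gpus-per-task=1 --constraint=hopper nvidia-smi -L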
sbatch example:
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH -p a100
#SBATCH --gpus-per-task=1
#SBATCH --constraint=hopper
...
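Assuming the script above is saved as my_job.sh (a placeholder name), submit it with sbatch and monitor it with squeue:

# Submit the batch script, then check its status in the queue
sbatch my_job.sh
squeue -u $USER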