Quickstart Guide for the Harlow cluster
This cluster is named for Frank Harlow, a giant in the field of computational fluid dynamics. It was commissioned on October 3, 2018.
The system comprises 30 standard compute nodes.
Access to the Harlow system requires a user account. See our computing page for details on how to request one. Access is only possible from within the GSU network.
Software and Environment
To manage access to pre-installed software such as compilers, libraries, pre- and post-processing tools, and other application software, Harlow uses the module command. This command offers the following functionality:
- Show lists of available software
- Access software in different versions
```shell
harlow:~ $ module avail
...
intel/<version>
...
harlow:~ $ module load intel/<version>
harlow:~ $ module list
Currently Loaded Modulefiles:
...
intel/<version>
...
```
Running Batch Jobs
Batch jobs on Harlow are handled by the Slurm workload manager and are executed with the following steps:
- Provide (write) a batch job script, see the examples below.
- Submit the job script with the command sbatch.
- Monitor and control the job execution, e.g. with the commands squeue and scancel (to cancel the job).
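The steps above can be sketched as a short session; the script name and job ID below are placeholders, not actual Harlow output:

```shell
# 1. Submit the job script; on success, sbatch prints the assigned job ID,
#    e.g. "Submitted batch job 123456"
sbatch my_job.sh

# 2. Monitor your own jobs only (-u limits the listing to one user)
squeue -u $USER

# 3. Cancel a job by its ID if needed (123456 is a placeholder)
scancel 123456
```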
MPI job script
Requesting 4 nodes in the general4 partition with 16 cores per node (no hyperthreading possible) for 10 minutes, using MPI.
```shell
#!/bin/bash
#SBATCH -J harlow_mpi_test
#SBATCH -p general4
#SBATCH -t 00:10:00
#SBATCH -N 4
#SBATCH --tasks-per-node=16
#SBATCH -o job%j.out   # stdout filename (%j is the job ID)
#SBATCH -e job%j.err   # stderr filename (%j is the job ID)

module load intel/<version>
export SLURM_CPU_BIND=none
mpirun -iface ib0 -env I_MPI_FAULT_CONTINUE=on -n $SLURM_NPROCS hello_world > hello.out
```
Hybrid MPI+OpenMP job script
Requesting 2 nodes with 2 MPI tasks per node and 8 OpenMP threads per MPI task.
```shell
#!/bin/bash
#SBATCH -J harlow_hyb_test
#SBATCH -t 00:20:00
#SBATCH -N 2
#SBATCH --tasks-per-node=2
#SBATCH --cpus-per-task=8
#SBATCH -o job%j.out   # stdout filename (%j is the job ID)
#SBATCH -e job%j.err   # stderr filename (%j is the job ID)

# This binds each thread to one core
export OMP_PROC_BIND=TRUE
# Number of threads as given by -c / --cpus-per-task
export OMP_NUM_THREADS=8
export KMP_AFFINITY=verbose,scatter

module load intel/<version>
mpiexec -iface ib0 -n 4 --perhost 2 ./hello_world > hello.out
```
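Instead of hard-coding the thread count as above, it can be derived from the Slurm environment so the script stays consistent with `--cpus-per-task`. A minimal sketch; `SLURM_CPUS_PER_TASK` is set by Slurm inside a job, and the fallback of 1 is an assumption for running outside a job:

```shell
# Derive the OpenMP thread count from the Slurm allocation.
# SLURM_CPUS_PER_TASK is set by Slurm when --cpus-per-task is given;
# fall back to 1 if it is unset (e.g. outside a batch job).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "Using $OMP_NUM_THREADS OpenMP threads per MPI task"
```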
The following partitions are available:

| Partition  | Max. walltime | Nodes           | Purpose                                                  |
|------------|---------------|-----------------|----------------------------------------------------------|
| devel      | 12:00:00      | 30 on demand    | high-priority development tests; can pre-empt other jobs |
| production | 24:00:00      | 30 on demand    | normal queue for production of data for research         |
| general8   | 24:00:00      | maximum 8 nodes | test queue for medium jobs                               |
| general4   | 24:00:00      | maximum 4 nodes | test queue for small jobs / class assignments            |
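A partition from the table is selected either on the command line or inside the script; `job.sh` below is a placeholder name:

```shell
# Select the devel partition at submission time
sbatch -p devel job.sh

# Equivalently, add this directive inside the job script:
#SBATCH -p devel
```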