- Logging on to the system
- List of installed software and how to use it
- Storage and filesystems
- Submitting jobs using SLURM
- SLURM accounting and special job submission
Login to the Frontenac cluster is via SSH only. You will need an SSH client, such as Terminal on Linux/macOS or MobaXterm on Windows. To log on to the cluster, execute the following command in your SSH client of choice:
ssh -X yourUserName@login.cac.queensu.ca
The first time you log on, you will be prompted to accept this server's RSA key (d0:9f:e9:e2:b0:fe:6b:56:bb:74:46:c5:fb:89:a4:41). Type "yes" to proceed, then enter your password as usual; note that no characters will appear while you type your password.
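If you log in frequently, you can store these connection details in your SSH client's configuration file so that a short alias replaces the full command. A minimal sketch of an ~/.ssh/config entry (the alias name "frontenac" is an arbitrary choice for this example):

```
Host frontenac
    HostName login.cac.queensu.ca
    User yourUserName
    ForwardX11 yes
```

With this entry in place, `ssh frontenac` is equivalent to the full command above (ForwardX11 corresponds to the -X flag).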
The Frontenac cluster uses a shared GPFS filesystem for all file storage. User files are located under /global/home, shared project space under /global/project, and network scratch space under /global/scratch. In addition to network storage, each compute node has a 1.5 TB local hard disk that jobs can use as fast local scratch space; its location is given by the $TMPDISK environment variable.
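As a sketch of how $TMPDISK might be used inside a job script, the fragment below stages data to local scratch, runs a computation there, and cleans up afterwards. The file names and the computation are illustrative stand-ins, and the script falls back to /tmp so it also runs outside the cluster, where $TMPDISK is unset:

```shell
#!/bin/bash
# Create a private working directory on the node-local disk.
# $TMPDISK is set by the scheduler on compute nodes; /tmp is a
# fallback so this sketch runs anywhere.
scratch="${TMPDISK:-/tmp}/job.$$"
mkdir -p "$scratch"

# Stage input into local scratch (normally copied from /global/home
# or /global/project; here we just create a stand-in file).
printf 'hello\n' > "$scratch/input.dat"

# Run the computation against local scratch
# (stand-in: uppercase the input).
tr a-z A-Z < "$scratch/input.dat" > "$scratch/results.dat"

# Copy results back to network storage before the job ends,
# then clean up local scratch.
result="$(cat "$scratch/results.dat")"
rm -rf "$scratch"
echo "$result"
```

Working on local scratch avoids hammering the shared GPFS filesystem with many small I/O operations; just remember to copy results back before the job exits, since local disk contents are not preserved.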
Frontenac uses the SLURM scheduler instead of Sun Grid Engine. The sbatch command is used to submit jobs, squeue can be used to check the status of jobs, and scancel can be used to kill a job. For users looking to get started with SLURM as fast as possible, a minimalist template job script is shown below:
#!/bin/bash
#SBATCH -c num_cpus                    # Number of CPUs requested. If omitted, the default is 1 CPU.
#SBATCH --mem=megabytes                # Memory requested in megabytes. If omitted, the default is 1024 MB.
#SBATCH -t days-hours:minutes:seconds  # How long will your job run for? If omitted, the default is 3 hours.

# some demo commands to use as a test
echo 'starting test job...'
sleep 120
echo 'our job worked!'
Assuming our job script is called test-job.sh, we can submit it with sbatch test-job.sh. Detailed documentation can be found on our SLURM documentation page. Finally, note that it is possible to start an interactive job with srun --x11 --pty bash, which opens a personal bash shell on a node with available resources.
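Interactive jobs accept the same resource flags as sbatch, so you can size the session to your needs. For example (the values here are illustrative, not defaults):

```shell
# Request 4 CPUs, 4096 MB of memory, and a 1-hour limit for an
# interactive shell, with X11 forwarding for graphical programs.
srun -c 4 --mem=4096 -t 0-01:00:00 --x11 --pty bash
```

When the requested resources become available, the prompt that appears is running on a compute node; exiting the shell ends the job and releases the resources.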