Frontenac is a general-purpose high-performance computing (HPC) cluster hosted at Queen’s University in Kingston, Ontario. It is an excellent resource for researchers who require compute resources beyond what their local machines can handle. Frontenac enables Canadian researchers to conduct complex scientific simulations and analyses, as well as store and manage large datasets with ease. It features 4000+ cores, 20 GPUs, and 2.1 PB of parallel filesystem storage.
- Wide variety of pre-installed scientific software
- Experienced, skilled staff to provide support and training
- Designed with security safeguards to ensure data integrity, confidentiality, and availability
- Compute, storage, and network optimized for scientific discovery and innovation
The standard way to access the Frontenac cluster is through SSH. We currently operate a single login node that is accessible from the outside through SSH. The hostname of this (CentOS Linux) login node is login.cac.queensu.ca.
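For example, from a terminal on Linux or macOS you would connect like this (replace username with your own account name; the optional -X flag enables X11 forwarding for graphical programs):

```shell
# Connect to the Frontenac login node over SSH.
# "username" is a placeholder for your own account name.
ssh -X username@login.cac.queensu.ca
```

On Windows, an SSH client such as PuTTY or the built-in OpenSSH client can be pointed at the same hostname.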
If you want to obtain a new password for accessing the Frontenac system, or if you have forgotten your password and cannot log in to the system, please (re-)activate your account by following these steps:
SLURM is the scheduler used by the Frontenac cluster. Like Sun Grid Engine (the scheduler used for the M9000 and SW clusters), SLURM is used for submitting, monitoring, and controlling jobs on a cluster. Any jobs or computations done on the Frontenac cluster must be started via SLURM. This tutorial supplies all the information necessary to run jobs on Frontenac.
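A minimal batch script and submission workflow might look like the following sketch (the job name and resource values are illustrative placeholders, not site defaults):

```shell
#!/bin/bash
#SBATCH --job-name=hello          # name shown in the queue (placeholder)
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
#SBATCH --mem=1G                  # memory request
#SBATCH --cpus-per-task=1         # CPU cores for this task

echo "Running on $(hostname)"
```

Submit the script with `sbatch hello.sh`, monitor it with `squeue -u $USER`, and find its output in `slurm-<jobid>.out` in the submission directory.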
The Frontenac cluster uses a shared GPFS filesystem for all file storage. User files are located under /global/home (500 GB quota), shared project space under /global/project, and network scratch space under /global/scratch (5 TB quota). In addition to the network storage, compute nodes have up to 1.5 TB of local hard disk for fast local scratch space, which jobs can access via the location specified by the $TMPDISK environment variable.
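As a sketch, a job script can stage its input to $TMPDISK, do its I/O-heavy work there, and copy results back to network storage at the end (here `sort` stands in for a real workload, and the /tmp fallback is only so the sketch also runs outside a job):

```shell
#!/bin/bash
# Stage data to fast node-local scratch ($TMPDISK inside a job),
# falling back to /tmp when run outside a job (for illustration only).
scratch="${TMPDISK:-/tmp}"
cp input.dat "$scratch/"
sort "$scratch/input.dat" > "$scratch/sorted.dat"   # stand-in for real work
cp "$scratch/sorted.dat" .                          # copy results back
```

Remember that node-local scratch is cleaned up after the job ends, so anything you want to keep must be copied back before the script exits.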
The Frontenac cluster is partitioned to enable the efficient and fair allocation of jobs. There are three main partitions: standard, reserved, and sse3 (the latter now decommissioned).
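A job can be directed to one of these partitions with SLURM's --partition option, either on the command line or as a directive in the batch script (job.sh is a placeholder name):

```shell
# At submission time, on the command line:
sbatch --partition=standard job.sh

# Or equivalently as a directive inside the batch script itself:
#SBATCH --partition=standard
```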
Frontenac uses the Lmod module system, as do the Compute Canada clusters and many other HPC clusters around the world.
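Typical Lmod usage looks like the following (the module name gcc/7.4.0 is a placeholder; run `module avail` on the cluster to see what is actually installed):

```shell
module avail            # list software available through Lmod
module spider gcc       # search for all versions of a package
module load gcc/7.4.0   # load a specific version (placeholder name)
module list             # show currently loaded modules
module unload gcc       # remove a module from the environment
```

Modules only modify your current shell environment, so load commands are usually repeated inside each batch script that needs them.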