
Traditional HPC


The Frontenac Cluster

Unleash the power of scientific discovery with Frontenac – an HPC Cluster built to drive innovation.

Frontenac is a general-purpose high-performance computing (HPC) cluster hosted at Queen's University in Kingston, Ontario. It is an excellent resource for researchers who need computing power beyond what their local machine can handle. Frontenac enables Canadian researchers to conduct complex scientific simulations and analyses, as well as store and manage large datasets with ease. It features 4000+ cores, 20 GPUs, and 2.1 PB of parallel filesystem storage.


Key Features

Software

Wide variety of pre-installed scientific software

Skilled Staff

Experienced, skilled staff to provide support and training

Secure

Designed with security safeguards to ensure data integrity, confidentiality, and availability

Optimization

Compute, storage, and network optimized for scientific discovery and innovation

FAQ

The standard way to access the Frontenac cluster is through SSH.

We currently operate only one login node that is accessible from the outside through SSH. The hostname of this (CentOS Linux) login node is login.cac.queensu.ca.
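
As a quick illustration, connecting from a terminal on Linux, macOS, or Windows (with an OpenSSH client installed) might look like the following; the username jsmith is a placeholder for your own CAC username:

    # Connect to the Frontenac login node (replace jsmith with your CAC username)
    ssh jsmith@login.cac.queensu.ca

    # Optionally enable X11 forwarding for graphical applications
    ssh -X jsmith@login.cac.queensu.ca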

Read the Full FAQ on the Frontenac Cluster Wiki

Resetting your password when you have forgotten it

If you want to obtain a new password for accessing the Frontenac system, or if you have forgotten your password and can't log in to the system, please (re-)activate your account by following these steps:

  • Visit the Password Reset Portal and enter your username and email address. Note that the email address must be the one we have on record for you.
  • Don't forget to check the "I agree to the AUP" and "I'm not a robot" boxes before you hit the "Submit" button.
  • The system will send you an email with a link. Click on it.
  • You will be presented with a temporary password which you can use to log in.

Read the Full FAQ on the Frontenac Cluster Wiki

SLURM is the scheduler used by the Frontenac cluster. Like Sun Grid Engine (the scheduler used for the M9000 and SW clusters), SLURM is used for submitting, monitoring, and controlling jobs on a cluster. Any jobs or computations run on the Frontenac cluster must be started via SLURM. The tutorial on the Frontenac Cluster Wiki provides all the information necessary to run jobs on Frontenac.
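
As a rough sketch (not an official template), a minimal SLURM batch script looks like the following; the resource values and the program name my_program are placeholders to adapt to your own job:

    #!/bin/bash
    #SBATCH --job-name=test_job     # name shown in the queue
    #SBATCH --time=01:00:00         # wall-clock limit (HH:MM:SS)
    #SBATCH --ntasks=1              # number of tasks (processes)
    #SBATCH --cpus-per-task=4       # cores per task
    #SBATCH --mem=8G                # memory per node

    ./my_program                    # placeholder for your own executable

The script is then submitted and monitored with the standard SLURM commands:

    sbatch myjob.sh      # submit the batch script
    squeue -u $USER      # list your queued and running jobs
    scancel <jobid>      # cancel a job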

Read the Full FAQ on the Frontenac Cluster Wiki

The Frontenac cluster uses a shared GPFS filesystem for all file storage. User home directories are located under /global/home (500 GB quota), shared project space under /global/project, and network scratch space under /global/scratch (5 TB quota). In addition to the network storage, compute nodes have up to 1.5 TB of local hard disk for fast local scratch space, which jobs can use via the location specified by the $TMPDISK environment variable.
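
For I/O-heavy jobs, a common pattern (sketched below with placeholder file and program names) is to stage data into the node-local scratch directory pointed to by $TMPDISK, run there, and copy the results back to network storage before the job ends:

    # Inside a SLURM batch script: stage input data to fast local scratch
    cp $HOME/input.dat $TMPDISK/

    # Run in local scratch (input.dat and my_program are placeholders)
    cd $TMPDISK
    $HOME/my_program input.dat > output.dat

    # Copy results back to network storage before the job finishes
    cp output.dat $HOME/results/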

Read the Full FAQ on the Frontenac Cluster Wiki

The Frontenac cluster is partitioned to enable the efficient and fair allocation of jobs. There are three main partitions: standard, reserved, and sse3 (decommissioned).
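
A job can be directed to a specific partition either on the command line or inside the batch script; for example (assuming your account has access to the partition in question):

    # Request the standard partition at submission time
    sbatch --partition=standard myjob.sh

    # Or specify it inside the batch script
    #SBATCH --partition=standard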

Read the Full FAQ on the Frontenac Cluster Wiki

Frontenac uses the Lmod module system, as do the Digital Research Alliance of Canada clusters and many other HPC clusters around the world.
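
Typical Lmod commands for finding and loading software look like the following; the gcc version shown is only an illustration and may differ from what is actually installed on Frontenac:

    module avail              # list software available on the cluster
    module spider gcc         # search for all versions of a package
    module load gcc/9.3.0     # load a specific version (version is illustrative)
    module list               # show currently loaded modules
    module unload gcc         # remove a module from the environment
    module purge              # unload all modules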

Read the Full FAQ on the Frontenac Cluster Wiki