Hardware:Frontenac

From CAC Wiki

The Frontenac cluster is CAC's newest compute cluster. It features a new set of hardware, a new network configuration, a new scheduler, a new software module system, a new OS, and a new set of compilers and related software. This page is intended to give an overview of its capabilities and provide a migration guide for new users. Please note that user accounts and data are *not* shared between Frontenac and the SW cluster, although you may request that your data be copied over.

Hardware

The Centre for Advanced Computing operates a cluster of x86-based multicore machines running Linux. This page explains the essential features of this cluster and serves as a basic guide to its usage.

Frontenac Cluster Nodes
Host CPU model Speed Cores Threads Memory
cac016 Xeon E7-4860 2.3 GHz 40 80 256 GB
cac017 Xeon E7-4860 2.3 GHz 40 80 256 GB
cac018 Xeon E7-4860 2.3 GHz 40 80 256 GB
cac019 Xeon E7-4860 2.3 GHz 40 80 1 TB
Software (SW) cluster nodes migrated to Frontenac
Host CPU model Speed Cores Threads Memory
cac019 E7-4860 2.3 GHz 40 80 256 GB
cac020 E7-8870 2.3 GHz 80 160 512 GB
cac021 E7-8870 2.3 GHz 80 160 512 GB
cac022 E7-8870 2.3 GHz 80 160 512 GB
cac023 E7-8870 2.3 GHz 80 160 512 GB
cac024 E7-8870 2.3 GHz 80 160 512 GB
cac025 E7-4830 v3 2.6 GHz 48 96 1 TB
cac026 E7-4830 v3 2.6 GHz 48 96 1 TB
cac027 E7-8850 v2 2.3 GHz 48 96 256 GB
cac028 E7-8867 v3 2.5 GHz 128 256 2 TB
cac029 E7-8867 v3 2.5 GHz 128 256 2 TB
cac030 E7-8867 v3 2.5 GHz 128 256 2 TB
cac031 E7-8867 v4 2.3 GHz 144 288 1 TB
cac032 E7-8867 v3 2.5 GHz 128 256 2 TB
cac033 E7-8867 v3 2.5 GHz 128 256 2 TB
cac034 E5-2650 v4 2.7 GHz 24 48 256 GB
cac035 E5-2650 v4 2.7 GHz 24 48 256 GB
cac036 E5-2650 v4 2.7 GHz 24 48 256 GB
cac037 E5-2650 v4 2.7 GHz 24 48 256 GB
cac038 E5-2650 v4 2.7 GHz 24 48 256 GB
cac039 E5-2650 v4 2.7 GHz 24 48 256 GB
cac040 E5-2650 v4 2.7 GHz 24 48 256 GB
cac041 E5-2650 v4 2.7 GHz 24 48 256 GB
cac042 E5-2650 v4 2.7 GHz 24 48 256 GB
cac043 E5-2650 v4 2.7 GHz 24 48 256 GB
cac044 E5-2650 v4 2.7 GHz 24 48 256 GB
cac045 E5-2650 v4 2.7 GHz 24 48 256 GB
cac046 E5-2650 v4 2.7 GHz 24 48 256 GB
cac047 E5-2650 v4 2.7 GHz 24 48 256 GB
cac048 E5-2650 v4 2.7 GHz 24 48 256 GB
cac049 E5-2650 v4 2.7 GHz 24 48 256 GB
cac050 E5-2650 v4 2.7 GHz 24 48 256 GB
cac051 E5-2650 v4 2.7 GHz 24 48 256 GB
cac052 E5-2650 v4 2.7 GHz 24 48 256 GB
cac053 E5-2650 v4 2.7 GHz 24 48 256 GB
cac054 E5-2650 v4 2.7 GHz 24 48 256 GB
cac055 E5-2650 v4 2.7 GHz 24 48 256 GB
cac056 E5-2650 v4 2.7 GHz 24 48 256 GB
cac057 E5-2650 v4 2.7 GHz 24 48 256 GB
cac058 E5-2650 v4 2.7 GHz 24 48 256 GB
cac059 E5-2650 v4 2.7 GHz 24 48 256 GB
cac060 E5-2650 v4 2.7 GHz 24 48 256 GB
cac061 E5-2650 v4 2.7 GHz 24 48 256 GB
cac062 E5-2650 v4 2.7 GHz 24 48 256 GB
cac063 E5-2650 v4 2.7 GHz 24 48 256 GB
cac064 E5-2650 v4 2.7 GHz 24 48 256 GB
cac065 E5-2650 v4 2.7 GHz 24 48 256 GB
cac066 E5-2650 v4 2.7 GHz 24 48 256 GB
cac067 E5-2650 v4 2.7 GHz 24 48 256 GB
cac068 E5-2650 v4 2.7 GHz 24 48 256 GB
cac069 E5-2650 v4 2.7 GHz 24 48 256 GB
cac070 E5-2650 v4 2.7 GHz 24 48 256 GB
cac071 E5-2650 v4 2.7 GHz 24 48 256 GB
cac072 E5-2650 v4 2.7 GHz 24 48 256 GB
cac073 E5-2650 v4 2.7 GHz 24 48 256 GB
cac074 E5-2650 v4 2.7 GHz 24 48 256 GB
cac075 E5-2650 v4 2.7 GHz 24 48 256 GB
cac076 E5-2650 v4 2.7 GHz 24 48 256 GB
cac077 E5-2650 v4 2.7 GHz 24 48 256 GB
cac078 E5-2650 v4 2.7 GHz 24 48 256 GB
cac079 E5-2650 v4 2.7 GHz 24 48 256 GB
cac080 E5-2650 v4 2.7 GHz 24 48 256 GB
cac081 E5-2650 v4 2.7 GHz 24 48 256 GB
cac082 E5-2650 v4 2.7 GHz 24 48 256 GB
cac083 E5-2650 v4 2.7 GHz 24 48 256 GB
cac084 E5-2650 v4 2.7 GHz 24 48 256 GB
cac085 E5-2650 v4 2.7 GHz 24 48 256 GB
cac086 E5-2650 v4 2.7 GHz 24 48 256 GB
cac087 E5-2650 v4 2.7 GHz 24 48 256 GB
cac088 E5-2650 v4 2.7 GHz 24 48 256 GB
cac089 E5-2650 v4 2.7 GHz 24 48 256 GB
cac090 E5-2650 v4 2.7 GHz 24 48 256 GB
cac091 E5-2650 v4 2.7 GHz 24 48 256 GB
cac092 E5-2650 v4 2.7 GHz 24 48 256 GB
cac093 E5-2650 v4 2.7 GHz 24 48 256 GB
cac094 E5-2650 v4 2.7 GHz 24 48 256 GB
cac095 E5-2650 v4 2.7 GHz 24 48 256 GB
cac096 E5-2650 v4 2.7 GHz 24 48 256 GB
cac097 E5-2650 v4 2.7 GHz 24 48 256 GB
cac098 E5-2650 v4 2.7 GHz 24 48 256 GB
cac099 E5-2650 v4 2.7 GHz 24 48 256 GB
cac100 E5-2650 v4 2.7 GHz 24 48 256 GB
cac101 E5-2650 v4 2.7 GHz 24 48 256 GB
cac102 E5-2650 v4 2.7 GHz 24 48 256 GB
cac103 E5-2650 v4 2.7 GHz 24 48 256 GB
cac104 E5-2650 v4 2.7 GHz 24 48 256 GB
cac105 E5-2650 v4 2.7 GHz 24 48 256 GB
cac106 E7-4850 v4 2.7 GHz 64 128 1 TB
cac107 Xeon Gold 6130 2.1 GHz 32 (+3 GPUs) 64 175 GB
cac108 Xeon Gold 6130 2.1 GHz 32 (+3 GPUs) 64 175 GB
sno019 Intel Xeon X5675 3 GHz 12 24 64 GB
sno020 Intel Xeon X5675 3 GHz 12 24 64 GB
sno021 Intel Xeon X5675 3 GHz 12 24 64 GB
sno022 Intel Xeon X5675 3 GHz 12 24 64 GB
sno023 Intel Xeon X5675 3 GHz 12 24 64 GB
sno024 Intel Xeon X5675 3 GHz 12 24 64 GB
sno025 Intel Xeon X5675 3 GHz 12 24 64 GB
sno026 Intel Xeon X5675 3 GHz 12 24 64 GB
sno027 Intel Xeon X5675 3 GHz 12 24 64 GB
sno028 Intel Xeon X5675 3 GHz 12 24 64 GB
sno030 Intel Xeon X5675 3 GHz 12 24 64 GB
sno031 Intel Xeon X5675 3 GHz 12 24 64 GB
sno032 Intel Xeon X5675 3 GHz 12 24 64 GB
sno033 Intel Xeon X5675 3 GHz 12 24 64 GB
sno034 Intel Xeon X5675 3 GHz 12 24 64 GB
sno035 Intel Xeon X5675 3 GHz 12 24 64 GB
sno036 Intel Xeon X5675 3 GHz 12 24 64 GB
sno037 Intel Xeon X5675 3 GHz 12 24 64 GB
sno038 Intel Xeon X5675 3 GHz 12 24 64 GB
sno039 Intel Xeon X5675 3 GHz 12 24 64 GB
sno040 Intel Xeon X5675 3 GHz 12 24 64 GB
sno041 Intel Xeon X5675 3 GHz 12 24 64 GB
sno042 Intel Xeon X5675 3 GHz 12 24 64 GB
sno043 Intel Xeon X5675 3 GHz 12 24 64 GB
sno044 Intel Xeon X5675 3 GHz 12 24 64 GB
sno045 Intel Xeon X5675 3 GHz 12 24 64 GB
sno046 Intel Xeon X5675 3 GHz 12 24 64 GB
sno047 Intel Xeon X5675 3 GHz 12 24 64 GB

Documentation

Quickstart

For those who want to just log on and get started with the new system, the bare essentials are shown below.

Logging on

Login to the Frontenac cluster is via SSH only. You will need an SSH client, such as Terminal on Linux/macOS or MobaXterm on Windows. To log on to the cluster, run the following command in your SSH client of choice:

ssh -X yourUserName@login.cac.queensu.ca

The first time you log on, you will be prompted to accept this server's RSA key (d0:9f:e9:e2:b0:fe:6b:56:bb:74:46:c5:fb:89:a4:41). Type "yes" to proceed, then enter your password; note that no characters appear on screen while you type it.
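If you connect often, an entry in ~/.ssh/config (for OpenSSH clients on Linux/macOS) saves retyping the options. The alias frontenac below is an arbitrary name of our choosing; only the hostname comes from the command above:

```
# ~/.ssh/config -- "frontenac" is an arbitrary alias
Host frontenac
    HostName login.cac.queensu.ca
    User yourUserName
    ForwardX11 yes
```

With this in place, ssh frontenac is equivalent to the full command shown above.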

Filesystems

The Frontenac cluster uses a shared GPFS filesystem for all file storage. User files are located under /global/home, shared project space under /global/project, and network scratch space under /global/scratch. In addition to network storage, each compute node has a 1.5 TB local hard disk that jobs can use as fast local scratch space, at the location specified by the $TMPDISK environment variable.
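A common pattern is to stage data through the node-local disk inside a job script. The sketch below is illustrative only: $TMPDISK is set by the scheduler inside a job, so we fall back to /tmp to keep the snippet runnable elsewhere, and the computation is a stand-in for your own:

```shell
# Hypothetical staging pattern for node-local scratch.
# Inside a Frontenac job, $TMPDISK points at the node's local disk;
# outside a job it is unset, so fall back to /tmp for this demo.
SCRATCH_DIR="${TMPDISK:-/tmp}/scratch_demo_$$"
mkdir -p "$SCRATCH_DIR"

# Stage input onto fast local disk, compute there, then copy results
# back to network storage (e.g. /global/scratch/$USER) before the job ends.
echo "some input data" > "$SCRATCH_DIR/input.dat"
tr 'a-z' 'A-Z' < "$SCRATCH_DIR/input.dat" > "$SCRATCH_DIR/output.dat"
RESULT=$(cat "$SCRATCH_DIR/output.dat")

# Local scratch is not permanent storage; clean up explicitly.
rm -rf "$SCRATCH_DIR"
echo "$RESULT"
```

Staging this way keeps I/O-heavy intermediate files off the shared GPFS filesystem, which benefits both your job and other users.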

Submitting jobs

Frontenac uses the SLURM scheduler instead of Sun Grid Engine. The sbatch command submits jobs, squeue checks the status of jobs, and scancel cancels a job. For users looking to get started with SLURM as quickly as possible, a minimal template job script is shown below:

#!/bin/bash
#SBATCH -c num_cpus                        # Number of CPUS requested. If omitted, the default is 1 CPU.
#SBATCH --mem=megabytes                    # Memory requested in megabytes. If omitted, the default is 1024 MB.
#SBATCH -t days-hours:minutes:seconds      # How long will your job run for? If omitted, the default is 3 hours.

# some demo commands to use as a test
echo 'starting test job...'
sleep 120
echo 'our job worked!'

Assuming our job script is called test-job.sh, we can submit it with sbatch test-job.sh. Detailed documentation can be found on our SLURM documentation page. Finally, note that it is possible to start an interactive job with srun --x11 --pty bash, which opens a personal bash shell on a node with available resources.
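The full submit/monitor/cancel cycle can be sketched as follows. The sbatch, squeue, and scancel commands only exist on a SLURM system such as Frontenac's login node, so they appear as comments here; the runnable part simply generates the job script from the template above, with example values filled in for the resource requests:

```shell
# Write the template job script with concrete (example) resource requests.
cat > test-job.sh <<'EOF'
#!/bin/bash
#SBATCH -c 1                  # 1 CPU
#SBATCH --mem=1024            # 1024 MB of memory
#SBATCH -t 0-00:10:00         # 10-minute time limit
echo 'starting test job...'
sleep 120
echo 'our job worked!'
EOF
chmod +x test-job.sh

# On the cluster's login node you would then run:
#   sbatch test-job.sh     # submit; prints "Submitted batch job <jobid>"
#   squeue -u $USER        # list your pending and running jobs
#   scancel <jobid>        # cancel the job by its numeric id
grep -c '^#SBATCH' test-job.sh
```

Output of the completed job lands in a file named slurm-&lt;jobid&gt;.out in the submission directory by default.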

Accounts, Allocations, Partitions

Please check out our helpfile about allocations on the Frontenac cluster.