Frontenac:Fees

From CAC Wiki
Revision as of 18:22, 28 February 2019

Fee Structure @ Frontenac

Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, 2019, allocations from Compute Canada's 2018 Resource Allocation Competition ("RAC 2018") run on this cluster. The cluster is not among the allocatable systems for the 2019 Compute Canada allocation round ("RAC 2019"); therefore, starting April 1, 2019, Frontenac will operate on a cost-recovery basis. This page provides details about the fee structure.

Price List

The following table lists the basic charges for compute and storage usage on the Frontenac cluster. It is meant as a reference to help users decide whether to continue using Frontenac or to seek alternatives.

Type                                        Unit Price
Compute (CPU usage)                         $225 / core-year
Compute (CPU usage, special arrangements)   Contact us
Storage (Project)                           $250 / Terabyte-year
Storage (Nearline)                          $45 / Terabyte-year
Storage (special arrangements)              Contact us

The prices quoted are for 2019 and are subject to change; they do not include HST.

Compute and Storage

The new fee structure for the Frontenac compute cluster applies both to the usage of CPUs (GPUs) and to storage on disk/tape. Fees are charged per annum, but can be pro-rated to a shorter duration without penalty. The standard units are:

Type Unit Explanation
CPU usage core-year
  • One core for the duration of one year.
  • The unit is not bound to a specific CPU but scheduled on any of the systems on the Frontenac cluster.
  • The associated memory and other specifics of the CPU vary. The quoted price is based on a 4 GB/core ratio.
  • We are not charging for memory, but will use a standard memory-equivalent (4GB/core) when memory usage exceeds CPU usage.
Storage Terabyte-year
  • One terabyte of storage for the duration of one year.
  • Storage must be sized ahead of usage and covers all storage areas (home, scratch, project).
  • Different rates apply for disk (project) storage and tape storage with HSM access (nearline).
  • A small amount of "home" space for usage with CPU is included in the fees.
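As an illustration of how these units combine, the following sketch estimates a pro-rated charge. This is a hypothetical helper, not the CAC's actual billing code; it assumes linear pro-rating and that memory above the 4 GB/core ratio is billed as extra core-equivalents, per the description above.

```python
# Illustrative cost estimate for Frontenac (hypothetical, not the CAC's billing system).
# Rates are from the 2019 price list; HST is not included.
CPU_RATE = 225.0      # $ per core-year
PROJECT_RATE = 250.0  # $ per terabyte-year (disk)
NEARLINE_RATE = 45.0  # $ per terabyte-year (tape/HSM)
GB_PER_CORE = 4.0     # standard memory-equivalent ratio

def billed_cores(cores, mem_gb):
    """Memory beyond 4 GB/core is charged as additional core-equivalents (assumed)."""
    return max(cores, mem_gb / GB_PER_CORE)

def estimate(cores, mem_gb, project_tb, nearline_tb, months=12):
    """Pro-rated annual charge in dollars, before HST (linear pro-rating assumed)."""
    fraction = months / 12.0
    compute = billed_cores(cores, mem_gb) * CPU_RATE
    storage = project_tb * PROJECT_RATE + nearline_tb * NEARLINE_RATE
    return (compute + storage) * fraction

# 8 cores with 64 GB (16 core-equivalents), 2 TB project, 10 TB nearline, full year:
print(estimate(8, 64, 2, 10))  # 16*225 + 2*250 + 10*45 = 4550.0
```

For example, a single core with default memory for half a year would come to $225 x 0.5 = $112.50 under these assumptions.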

"High-Priority" and "Metered" Compute Access

There are two standard types of access to the Frontenac cluster: "High-Priority" access, which provides scheduled access that will in most cases be "rapid" for smaller jobs, and "Metered" access, which uses a standard priority that may entail longer waiting times but is charged only according to actual usage. In addition, we offer special arrangements. Here is a more detailed explanation:

Type Explanation
Metered Compute Access
  • Access entitles the user to a priority proportional to the number of core-years purchased.
  • Continuous usage results in the purchased number of core-years.
  • Overall usage is capped at the number of core-years purchased.
  • Unused portions of the purchase can be "rolled over" into a second year, after which they expire.
  • Users will be notified when 80% usage is reached, and given the option to purchase further resources.
  • An automatic "top-up" option exists.
Special arrangements
  • The CAC is open to special arrangements for short-term or long-term projects.
  • Such arrangements may include dedicated servers for a fixed duration, contributed systems, and other options.
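The metered accounting described above can be sketched as follows. This is a hypothetical illustration of the bookkeeping, not the scheduler's actual implementation; the conversion of one core-year to 8,760 core-hours is an assumption.

```python
# Hypothetical sketch of metered-access accounting (not the real scheduler logic).
HOURS_PER_CORE_YEAR = 365 * 24  # assumed: 8760 core-hours per core-year

def metered_status(purchased_core_years, used_core_hours):
    """Return (used_core_years, warn, capped) for a metered allocation.

    warn   -- True once 80% of the purchase is consumed (user is notified)
    capped -- True once usage reaches the purchased amount (no further scheduling)
    """
    used = used_core_hours / HOURS_PER_CORE_YEAR
    warn = used >= 0.8 * purchased_core_years
    capped = used >= purchased_core_years
    return used, warn, capped

# A 10 core-year purchase after 70,080 core-hours (8 core-years) of use:
print(metered_status(10, 70080))  # (8.0, True, False) -- notified, not yet capped
```

At the 80% mark the user would be offered further resources or, with the automatic "top-up" option, additional core-years would be purchased without interruption.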

Project and Nearline storage

There are two standard types of storage on the Frontenac file system, both part of the "Hierarchical Storage Management" system. "Project" storage refers to storage immediately accessible on a disk through the GPFS file system. "Nearline" storage refers to data that reside on tape, but are accessible through disk when needed, albeit with a delay. Here is a more detailed explanation:

Type Explanation
Project storage
  • Used for frequently used, "active" data
  • Data reside on disk
  • Standard areas are: /global/home, /global/project
  • Access is immediate (at the speed of the GPFS system)
  • Home and project are backed up, scratch is not
  • The /project space is shared among members of a group; /home and /scratch are individual
Nearline storage
  • Used for infrequently used, "passive" data
  • Data reside on tape, with "stubs" on disk
  • Standard areas are: /global/home (individual), /global/project (shared)
  • Access requires (automatic) retrieval to disk and entails delays depending on data size
  • Backup policy is the same as for project data
  • Not suitable for I/O during program runs or data analysis
Intermediate data
  • Data reside on global or local disk
  • Subject to periodic purges
  • Standard areas are: /global/scratch, /lscratch, /tmp
  • Used for data transactions free of charge (for registered users)