Abaqus
This is a short help file on using the Finite-Element Analysis (FEA) code "Abaqus" on our machines. This software is only licensed for academic researchers who work at a university that is already covered by an Abaqus license. The software is only made available to persons who belong to a specific Unix group. See details below.
This file documents the usage of the software on both the "old" SW cluster and the "new" Frontenac cluster. Please make sure you consult the sections for the correct platform.
What is Abaqus?
The ABAQUS suite of software for finite element analysis (FEA) has the ability to solve a wide variety of simulations. The ABAQUS suite consists of three core products - ABAQUS/Standard, ABAQUS/Explicit and ABAQUS/CAE.
The most recent version of Abaqus on our systems is Abaqus 2017. Earlier versions are available.
All versions of the Abaqus package are located in the directory /opt/abaqus (on SW cluster, swlogin1) or /global/software/abaqus (on Frontenac, caclogin01).
Access and Licensing
This software is only available to our users working at a university that is already covered by a license. Since our license covers Queen's University, our Queen's users can use the software without further stipulation. To use it, you are required to read through the Abaqus Licensing Policy and sign a statement. You will then be added to a Unix group "abaqus", which enables you to run the software. Contact us if you are in doubt about whether you will be able to run Abaqus on our system.
The Abaqus license is "token limited", and the following licensing limits apply on our systems. Different components of the software require different numbers of tokens for execution. The number of tokens is approximately equal to the number of processors employed. Currently our license supports
150 process tokens
Setup through the "module" command:
module load abaqus
will add the proper directories to the PATH and enable the use of the software. The version this currently sets up is Abaqus 2017.
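For example, to load the module and then confirm which release was set up (information=release is one of the standard query options of the abaqus driver, but check it against your installed version):

```shell
# load the default Abaqus module and verify the release it points to
module load abaqus
abaqus information=release
```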
The following instructions assume that you are a member of the Unix group "abaqus". They pertain only to the Standard and Explicit components of the software. The instructions in this section are only useful if you want to run an interactive test job of Abaqus on the workup node. If you want to run a production job, please refer to the instructions on how to start an Abaqus batch job (see next section).
The Abaqus program uses a sophisticated syntax to set up a job run. Instructions to the program are written into an input file which is specified when the program is invoked. While an input file can be written "from scratch", it is also possible to use the ABAQUS/CAE component to generate such a file. Both techniques are outside the scope of this FAQ. You can also have a look at a simple example input file here. Documentation for Abaqus is extensive, and available both electronically and in print. There is no substitute for consulting it.
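Just to illustrate the flavour of the keyword syntax (this is not the linked example; the set names BAR and STEEL and all numerical values are made up, so treat it as a sketch only), a minimal input file for a single two-node truss element under a point load might look like this:

```
*HEADING
Minimal single-truss test case (illustration only)
*NODE
1, 0., 0., 0.
2, 1., 0., 0.
*ELEMENT, TYPE=T3D2, ELSET=BAR
1, 1, 2
*SOLID SECTION, ELSET=BAR, MATERIAL=STEEL
1.0,
*MATERIAL, NAME=STEEL
*ELASTIC
210000., 0.3
*BOUNDARY
1, 1, 3
2, 2, 3
*STEP
*STATIC
*CLOAD
2, 1, 100.
*END STEP
```

Consult the Abaqus Keywords Reference for the exact syntax of each keyword before building your own input files.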
Assuming that we have an input file called testsys.inp, we can initiate a run:
abaqus job=test001 input=testsys.inp scratch=/scratch/hpcXXXX (on SW)
abaqus job=test001 input=testsys.inp scratch=/global/scratch/hpcXXXX (on Frontenac)
The job= option specifies what the output files are to be called. They have various different "filename extensions" but share the name specified here (in our case test001). With the input= option, we specify which input file to use. There are more options, such as cpus= and mp_mode= for running parallel jobs, but the two used above should get a simple serial job running.
The above sequence starts the job in the background, i.e. after an initial setup phase, your terminal returns although the job is still running. If you want to avoid this, you can include the interactive option in the command line.
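For example, on Frontenac (hpcXXXX stands for the digits in your own username), a foreground run would look like this:

```shell
# "interactive" keeps the job attached to the terminal until it finishes,
# instead of returning the prompt after the setup phase
abaqus job=test001 input=testsys.inp scratch=/global/scratch/hpcXXXX interactive
```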
The Abaqus software by default uses a directory in /tmp (which is local to the nodes on which the software is executing) as scratch space. This default causes some Abaqus jobs to fail. It must therefore be changed to the standard scratch space /scratch/hpcXXXX on SW or /global/scratch/hpcXXXX on Frontenac, where XXXX are the numbers in your username. This is done by including the option scratch=/scratch/hpcXXXX or scratch=/global/scratch/hpcXXXX in your command line.
Also, do not forget to occasionally check the contents of this scratch directory by typing (replacing XXXX with the proper numbers for your username):
ls -lt /global/scratch/hpcXXXX
and removing any files that might be left over from old Abaqus runs. This is necessary because Abaqus will not remove these files if a job was terminated before it ran to completion.
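The cleanup can be scripted; here is a minimal sketch that lists stale files before anything is deleted (the SCRATCH_DIR variable name is our own, and hpcXXXX is a placeholder for your account):

```shell
# List leftover Abaqus scratch files older than 7 days.
# SCRATCH_DIR is a helper variable of this sketch, not an Abaqus setting;
# replace hpcXXXX with the digits of your own username.
dir=${SCRATCH_DIR:-/global/scratch/hpcXXXX}
if [ -d "$dir" ]; then
    # print candidates first; uncomment the -delete line only once you are sure
    find "$dir" -maxdepth 1 -type f -mtime +7 -print
    # find "$dir" -maxdepth 1 -type f -mtime +7 -delete
fi
```

Reviewing the printed list before enabling -delete avoids removing files from a job that is still running.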
More about changing the Abaqus environment may be learned from the "Installation and Licensing Guide" (chapter 4) of the Abaqus documentation. Please contact us if you need assistance.
Parallel production (batch) runs
In most cases, you will run Abaqus in batch mode. Most interactive work can be done elsewhere, whereas the computationally intensive runs are executed on the cluster. Production jobs are submitted on the systems via a scheduler.
Note that the usage of the scheduler for all production jobs is mandatory. Production jobs that are submitted outside of the load balancing software will be terminated by the system administrator.
The Abaqus jobs that you will want to run on our machines are likely to be quite large. To exploit parallelism, Abaqus can execute the solver on several CPUs simultaneously.
The Abaqus software achieves a certain degree of parallel scaling on both shared-memory and distributed-memory machines; which parallel mode is supported depends on the operation. Note that only shared-memory parallelism is in use on our clusters. The number of processes to be started must be decided before a parallel Abaqus run.
Production jobs must be submitted via the SLURM scheduler. To obtain details about the scheduler, read our SLURM help file. For an Abaqus batch job, this means that rather than issuing the commands directly, you wrap them into a batch script that looks similar to this:
#!/bin/bash
#SBATCH --job-name=Abaqus_Test
#SBATCH --mail-type=ALL
#SBATCH --mail-user=myEmail@whatever.com
#SBATCH -o STD.out
#SBATCH -e STD.err
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 4
#SBATCH -t 05:00
#SBATCH --mem=1000
module load abaqus
abaqus job=test input=testsys.inp cpus=$SLURM_CPUS_PER_TASK mp_mode=threads scratch=/global/scratch/hpcXXXX -interactive
The script (let's call it abaqus_test.sh) needs to be altered to fit the specifics of your job.
The --mail-type and --mail-user lines set up email notification at the beginning and end of a run. -o and -e specify file names to capture "standard output" and "standard error", i.e. the information that would be sent to the screen in an interactive run.
The -t option indicates a time limit. Choose it such that the job has time to finish, as it will be terminated when the time limit is reached. Note that this must be specified, as it is otherwise set to a (likely too short) default value. Don't set it too long either, as that will make the job hard to schedule. The same holds for the --mem option, which specifies an upper limit for memory usage. Jobs that exceed this limit will be terminated. Choose it such that the job "fits", but don't high-ball it too much, as this will make the job hard or impossible to schedule.
Keep the "-n" and "-N" options at 1 to indicate that one main process is running on a single node.
The -c option specifies the number of processors (cores) that are requested. The value of the variable SLURM_CPUS_PER_TASK is set to the value specified in this line. "mp_mode=threads" enables shared-memory parallel execution. Note that all processors are allocated on a single node.
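When testing the same command line outside of a batch job, SLURM_CPUS_PER_TASK is not set; a small sketch (NCPUS is our own helper variable, not part of Abaqus or SLURM) that falls back to a single CPU:

```shell
# Use the SLURM allocation when present, otherwise default to 1 CPU.
NCPUS=${SLURM_CPUS_PER_TASK:-1}
echo "would run: abaqus ... cpus=$NCPUS mp_mode=threads"
```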
After altering the script appropriately, it is submitted with
sbatch abaqus_test.sh
Because of the limited scaling capabilities, jobs should use no more than 12 processors. Also, the total memory of each node is limited, so jobs with large memory requirements should request memory up-front to avoid being scheduled on too small a node. Please contact us if you need help with this.
Migration SW -> Frontenac
Abaqus usage differs in several respects between the SW cluster and the new Frontenac cluster, as documented in the platform-specific notes above. The main impact comes from the different scheduler.
For Queen's Users : Installing Abaqus on your local PC
Our Abaqus license counts as a floating site license for Queen's University. As a result, our users can access the license from other locations on Queen's Campus. This only works for fixed machines on the Queen's network. Here are instructions on how to install Abaqus on a Queen's PC. Note that we will make the software available only to our Queen's users and that the license server is being monitored. Usage falling outside of the licensing terms is prohibited and will be investigated if detected.
Abaqus is a very complex software package and requires some practice to be used efficiently. We cannot explain its use in any detail here. Extensive documentation is installed on the clusters:
/opt/abaqus/2017/SIMULIA2017doc/English/DSSIMULIA_Established.htm (SW cluster)
/global/software/abaqus/2017/SIMULIA2017doc/English/DSSIMULIA_Established.htm (Frontenac)
View these files with a locally started browser such as Firefox; for security reasons, we do not run a web server on the cluster.