The SW (Linux) Cluster
The Centre for Advanced Computing operates a cluster of x86-based multicore machines running Linux. This page explains the essential features of this cluster and is meant as a basic guide for its usage.
Type of Hardware
Our cluster consists of multiple x86 multicore nodes made by Dell and IBM (based on Intel Xeon X5670 or E7-4860 processors). All nodes run CentOS Linux and share a common file system. Access is handled by Grid Engine. The server nodes are called sw0011...sw0054.
Why these Systems?
The main emphasis of these systems is high floating-point performance for a modest number of processes / threads. Since commercial software such as Fluent and Abaqus is increasingly focused on Linux-only support, this cluster was acquired to continue to offer recent versions of these software packages. In addition, the higher single-core performance of these nodes (compared with the Sparc/Solaris based M9000 cluster, for instance) allows for more efficient use of license seats, which are usually priced per core.
Who Should Use This Cluster?
The software cluster runs the Linux operating system, and should therefore only be used if the software cannot be compiled or run on the Sparc/Solaris platform. Runs that require more than 64 Gbyte of memory should be performed on the M9000 cluster, unless the program is parallelized using MPI with distributed memory and very low communication requirements.
We suggest you consider using this compute cluster if:
- your software is available only for Linux, or cannot be compiled or run on the Sparc/Solaris platform;
- your application needs high floating-point performance for a modest number of processes or threads;
- you use commercial packages such as Fluent or Abaqus, whose recent versions we offer on Linux only.
This cluster might not be suitable if:
- your runs require more than 64 Gbyte of memory (in that case, use the M9000 cluster, unless your program uses distributed-memory MPI with very low communication requirements);
- your software compiles and runs well on the Sparc/Solaris platform.
If you think your application could run more efficiently on these machines, please contact us (email@example.com) to discuss any concerns and let us assist you in getting started.
Note that on this cluster (as on the M9000s), we have to enforce dedicated cores or CPUs to avoid sharing and context-switching overheads. No "overloading" can be allowed.
How Do I Use This Cluster?
... to access
The Secure Portal offers a direct link called xterm (linux login node). This link connects via a terminal to swlogin1, which is designated as the login/workup node for the cluster. If you encounter issues with the portal login, please let us know. Meanwhile, it is possible to "ssh" directly from sflogin0 to swlogin1 by typing
- ssh sw0010
and re-typing your system password.
The file systems for all of our clusters are shared, so you will be using the same home directory as when you are using the M9000 servers or the standard login node sfnode0. swlogin1 may be used for compilation, program development, and testing only, not for production jobs.
Since the SW cluster has a completely different architecture from the M9000 servers, code must be re-compiled when migrating to this cluster. The compiler we use on this cluster is the Intel Compiler Suite, which includes compilers for Fortran, C, and C++, as well as MPI and OpenMP support, debuggers, and development tools. This software resides in /opt/ics and is visible only to the Linux cluster. The versions are:
- Fortran (ifort): Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.1 Build 20110811
- C (icc): Intel(R) C Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.1 Build 20110811
- C++ (icpc): Intel(R) C++ Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.1 Build 20110811
This compiler suite needs to be activated before use. The command is
- use ics
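As an example, after activation you could compile and run a small Fortran test program along the following lines (a sketch only; the file and program names, the optimization flag, and the use of OpenMP are illustrative, not requirements):
- use ics
- ifort -O2 -openmp -o myprog myprog.f90
- ./myprog
Remember that running programs on swlogin1 is for testing only; production runs go through Grid Engine as described below.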
In many cases, especially when software from the public domain is involved, the preferred compilers are the GNU C/C++/Fortran compilers. The system version of these is:
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-126.96.36.199/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
No special activation is needed to use these, as they reside in a system directory. A newer version of this compiler set is available in /opt/gcc-4.8.3 and can be accessed using the command
- use gcc-4.8.3
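For instance, to check that the newer version is active and compile a C program with it (again a sketch; file names and flags are illustrative):
- use gcc-4.8.3
- gcc --version
- gcc -O2 -o myprog myprog.c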
For applications that cannot be re-compiled (for instance, because the source code is not accessible), a pre-compiled Linux version (an x86-64 build for Red Hat will do) needs to be obtained.
... to run jobs
As mentioned earlier, program runs for user and application software on the login node are allowed only for test purposes or if interactive use is unavoidable. In the latter case, please get in touch to let us know what you need. Production jobs must be submitted through the Grid Engine load scheduler. For a description of how to use Grid Engine, see the HPCVL GridEngine FAQ.
Grid Engine will schedule jobs to a default pool of machines unless otherwise stated. This default pool presently contains only the M9000 nodes m9k0001-8. Therefore, you need to add the following two lines to your script for your job to be scheduled to the Linux SW cluster exclusively:
- #$ -q abaqus.q
- #$ -l qname=abaqus.q
The queue name abaqus.q that is added here derives from the Abaqus software that was initially (and still is) run on this cluster.
Note that your jobs will run on dedicated threads, i.e. typically up to 12 processes can be scheduled on a single node. Grid Engine does the scheduling; there is no way for the user to determine which processes run on which cores.
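To illustrate, a minimal serial job script for this cluster could look as follows (a sketch only: the script, log, and program names are placeholders, and multi-process jobs would additionally need to request a parallel environment):

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -q abaqus.q
#$ -l qname=abaqus.q
#$ -o myjob.log
#$ -e myjob.err
# lines starting with #$ are Grid Engine directives; the rest is an ordinary shell script
./myprog

The script would then be submitted from swlogin1 with
- qsub myjob.sh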
General information about using HPCVL facilities can be found in our FAQ pages. We also supply user support (please send email to firstname.lastname@example.org or contact us directly), so if you experience problems, we can assist you.