MPI (the Message Passing Interface) is widely used for programming parallel computers ranging from shared-memory servers to large clusters. This workshop is directed at current or prospective users of parallel computers who want to significantly improve the performance of their programs by “parallelizing” their code on a wide range of platforms.
The course covers the basics of MPI programming. After a brief introduction to MPI, we discuss the fundamentals: about a dozen MPI routines that suffice to familiarize users with the basic concepts of MPI programming. As it turns out, these alone are enough to create well-scaling programs for some simple applications. If time allows, we also discuss and demonstrate user-defined data types, array distribution across processes, and task distribution, with examples.
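To give a flavour of the basic routines mentioned above, here is a minimal sketch of an MPI "hello world" in C. It uses four of the fundamental calls (MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Finalize); the file and program names are illustrative, not course material.

```c
/* Minimal MPI sketch: every rank reports its identity.
 * Compile with an MPI wrapper, e.g.:  mpicc hello.c -o hello
 * Run with an MPI launcher, e.g.:     mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime      */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank (0..)  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down MPI cleanly      */
    return 0;
}
```

Each process executes the same program; the rank returned by MPI_Comm_rank is what lets different processes take on different parts of the work.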
Throughout this workshop we will perform simple exercises on a dedicated cluster to put our newly gained knowledge into practice.
Instructor: Hartmut Schmider, Centre for Advanced Computing, Queen's University.
Prerequisites: Basic FORTRAN or C/C++ programming.