MPI (the Message Passing Interface) is widely used for programming parallel computers, from multi-core laptops to large-scale SMP servers and clusters. This workshop is aimed at current or prospective users of parallel computers who want to significantly improve the performance of their programs by parallelizing their code across a wide range of platforms.
The course content ranges from introductory to intermediate. After a brief introduction to MPI, we cover MPI fundamentals, including about a dozen MPI routines, to familiarize users with the basic concepts of MPI programming. We then discuss and demonstrate, with examples, user-defined data types, array layout in memory, array distribution across processes, and task distribution.
Instructor: Gang Liu, Centre for Advanced Computing, Queen's University.
Prerequisites: Basic Fortran or C/C++ programming.