Publisher's Synopsis
The parallel programming community recently organized an effort to standardize the communication subroutine libraries used for programming massively parallel computers such as the Connection Machine and Cray's new T3D, as well as networks of workstations. The standard they developed, the Message-Passing Interface (MPI), not only unifies within a common framework programs written in a variety of existing (and currently incompatible) parallel languages, but also allows programs to be ported to future machines. Three of the authors of MPI have teamed up here to present a tutorial on how to use MPI to write parallel programs, particularly for large-scale applications.

MPI, the long-sought standard for expressing parallel algorithms and running them on a variety of computers, allows software development costs to be leveraged across parallel machines and networks, and will spur the development of a new level of parallel software. This book covers all the details of the MPI functions used in the motivating examples and applications, with many MPI functions introduced in context.

The topics covered include portability of programs among MPP systems; examples and counterexamples illustrating subtle aspects of the MPI definition; how to write libraries that take advantage of MPI's special features; application paradigms for large-scale examples; complete program examples; visualizing program behaviour with graphical tools; an implementation strategy and a portable implementation; using MPI on workstation networks and on MPPs (Intel, Thinking Machines, IBM); scalability and performance tuning; and how to convert existing codes to MPI.
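For readers unfamiliar with the message-passing style the synopsis describes, the following C sketch (an illustration in the spirit of the book's examples, not an excerpt from it) passes a single integer from one process to another using a few core MPI calls:

/* Minimal illustrative MPI program: process 0 sends an integer to process 1.
 * Compile with an MPI compiler wrapper (e.g. mpicc) and run with mpiexec -n 2. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* start up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (size >= 2) {
        if (rank == 0) {
            /* send one MPI_INT to process 1 with message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive one MPI_INT from process 0 with message tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process 1 received %d from process 0\n", value);
        }
    }

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}

The same program runs unchanged on a network of workstations or an MPP; only the way it is launched differs, which is the portability the standard was designed to provide.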