Parallel Computing on Distributed Memory Multiprocessors


Book Description

Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem in its own right, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
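To make the "regular topology" communication pattern concrete, the sketch below shows a nearest-neighbor exchange on a logical ring of processes. It is not drawn from the book (which predates the MPI standard); MPI serves here purely as a modern stand-in for the message-passing layers of the era, and all names are illustrative.

    /* Nearest-neighbor exchange on a logical ring: each process sends its
     * local value to the right neighbor and receives from the left.
     * Illustrative sketch only; MPI is a stand-in for message passing. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;        /* neighbor in the ring topology */
        int left  = (rank - 1 + size) % size;

        double local = (double)rank, from_left;
        /* Combined send/receive avoids deadlock on the ring. */
        MPI_Sendrecv(&local, 1, MPI_DOUBLE, right, 0,
                     &from_left, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %.0f from rank %d\n", rank, from_left, left);
        MPI_Finalize();
        return 0;
    }

The cost of each such exchange is governed by the latency and bandwidth of the interconnect, which is exactly why the node architecture, topology, and communication protocols listed above all shape overall performance.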




Introduction to Parallel Computing


Book Description

A complete source of information on almost all aspects of parallel computing, from introduction to architectures, programming paradigms, algorithms, and programming standards. It covers traditional computer science algorithms, scientific computing algorithms, and data-intensive algorithms.




Shared-Memory Parallelism Can Be Simple, Fast, and Scalable


Book Description

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time to emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.

The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice.

The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time; it is also the first graph processing system to support in-memory graph compression.

The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.

This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
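To give a flavor of the short, concise traversal code the description mentions, here is a much-simplified sketch of Ligra's edgeMap abstraction. The real framework is a parallel C++ library; the name edgeMap and the update/cond split mirror its published interface, but the sequential C implementation below is purely illustrative.

    /* Simplified, sequential C rendering of a Ligra-style edgeMap.
     * The real Ligra runs the per-edge applications in parallel. */
    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct { int n; int *offsets; int *edges; } Graph;   /* CSR graph */
    typedef struct { int count; int *vertices; } VertexSubset;   /* frontier  */

    /* Apply update(src, dst) over every edge out of the frontier; targets
     * for which cond holds and update returns true form the next frontier. */
    static VertexSubset edgeMap(const Graph *G, VertexSubset F,
                                bool (*update)(int, int, void *),
                                bool (*cond)(int, void *), void *env) {
        VertexSubset next;
        next.count = 0;
        next.vertices = malloc(sizeof(int) * (size_t)G->n);
        for (int i = 0; i < F.count; i++) {
            int s = F.vertices[i];
            for (int e = G->offsets[s]; e < G->offsets[s + 1]; e++) {
                int d = G->edges[e];
                if (cond(d, env) && update(s, d, env))
                    next.vertices[next.count++] = d;
            }
        }
        return next;
    }

    /* BFS update: record the parent of a newly visited vertex. */
    static bool bfs_update(int s, int d, void *env) {
        int *parents = env;
        if (parents[d] == -1) { parents[d] = s; return true; }
        return false;
    }
    /* BFS cond: only consider vertices not yet visited. */
    static bool bfs_cond(int d, void *env) {
        return ((int *)env)[d] == -1;
    }

    /* BFS driver: repeatedly call edgeMap until the frontier is empty. */
    static void bfs(const Graph *G, int root, int *parents) {
        for (int v = 0; v < G->n; v++) parents[v] = -1;
        parents[root] = root;
        VertexSubset F = { 1, malloc(sizeof(int)) };
        F.vertices[0] = root;
        while (F.count > 0) {
            VertexSubset next = edgeMap(G, F, bfs_update, bfs_cond, parents);
            free(F.vertices);
            F = next;
        }
        free(F.vertices);
    }

In Ligra proper, edgeMap also switches automatically between sparse and dense frontier representations; the point of the abstraction is that an entire BFS reduces to the short driver loop above.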




UPC


Book Description

This is the first book to explain the language Unified Parallel C and its use. Authors El-Ghazawi, Carlson, and Sterling are among the developers of UPC, with close links to the industrial members of the UPC consortium. Their text covers background material on parallel architectures and algorithms, and includes UPC programming case studies. This book represents an invaluable resource for the growing number of UPC users and application developers. More information about UPC can be found at http://upc.gwu.edu/. An Instructor Support FTP site is available from the Wiley editorial department.




Parallel and Distributed Processing


Book Description

This volume contains the proceedings from the workshops held in conjunction with the IEEE International Parallel and Distributed Processing Symposium, IPDPS 2000, on 1-5 May 2000 in Cancun, Mexico. The workshops provide a forum for bringing together researchers, practitioners, and designers from various backgrounds to discuss the state of the art in parallelism. They focus on different aspects of parallelism, from runtime systems to formal methods, from optics to irregular problems, from biology to networks of personal computers, from embedded systems to programming environments. The following workshops are represented in this volume:

- Workshop on Personal Computer Based Networks of Workstations
- Workshop on Advances in Parallel and Distributed Computational Models
- Workshop on Parallel and Distributed Computing in Image, Video, and Multimedia
- Workshop on High-Level Parallel Programming Models and Supportive Environments
- Workshop on High Performance Data Mining
- Workshop on Solving Irregularly Structured Problems in Parallel
- Workshop on Java for Parallel and Distributed Computing
- Workshop on Biologically Inspired Solutions to Parallel Processing Problems
- Workshop on Parallel and Distributed Real-Time Systems
- Workshop on Embedded HPC Systems and Applications
- Reconfigurable Architectures Workshop
- Workshop on Formal Methods for Parallel Programming
- Workshop on Optics and Computer Science
- Workshop on Run-Time Systems for Parallel Programming
- Workshop on Fault-Tolerant Parallel and Distributed Systems

All papers published in the workshop proceedings were selected by the program committee on the basis of referee reports. Each paper was reviewed by independent referees who judged the papers for originality, quality, and consistency with the themes of the workshops.




Scalable Shared Memory Multiprocessors


Book Description

Mathematics of Computing -- Parallelism.




An Introduction to Parallel Programming


Book Description

An Introduction to Parallel Programming, Second Edition presents a tried-and-true tutorial approach that shows students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. As the first undergraduate text to directly address compiling and running parallel programs on multi-core and cluster architectures, this second edition carries forward its clear explanations for designing, debugging, and evaluating the performance of distributed and shared-memory programs while adding coverage of accelerators via new content on GPU programming and heterogeneous programming. New and improved user-friendly exercises teach students how to compile, run, and modify example programs.

- Takes a tutorial approach, starting with small programming examples and building progressively to more challenging examples
- Explains how to develop parallel programs using the MPI, Pthreads, and OpenMP programming models
- A robust package of online ancillaries for instructors and students includes lecture slides, a solutions manual, downloadable source code, and an image bank

New to this edition:

- New chapters on GPU programming and heterogeneous programming
- New examples and exercises related to parallel algorithms
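As a taste of the kind of program the book teaches (this example is not taken from the book itself), here is a minimal OpenMP reduction in C that sums an array across all available cores:

    /* Minimal OpenMP example: a parallel reduction summing an array. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        for (int i = 0; i < N; i++) a[i] = 1.0;

        double sum = 0.0;
        /* The reduction clause gives each thread a private partial sum
         * and combines them at the end of the loop, avoiding a data race. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with gcc -fopenmp, the same source runs with however many threads the runtime provides (controllable via OMP_NUM_THREADS), which is exactly the kind of compile-run-modify exercise the text emphasizes.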




Distributed Shared Memory


Book Description

The papers presented in this text survey both distributed shared memory (DSM) research efforts and commercial DSM systems. The book discusses the relevant issues that make the concept of DSM one of the most attractive approaches for building large-scale, high-performance multiprocessor systems. The authors provide a general introduction to the DSM field as well as a broad survey of the basic DSM concepts, mechanisms, design issues, and systems. The book concentrates on basic DSM algorithms, their enhancements, and their performance evaluation. In addition, it details implementations that employ DSM solutions at the software and the hardware level. This guide is a research and development reference that provides state-of-the-art information that will be useful to architects, designers, and programmers of DSM systems.
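As an illustration of the kind of basic mechanism such surveys cover (this sketch is generic and not an algorithm taken from the book), the following C code models write-invalidate coherence transitions for a single shared page, in the style of a simple MSI protocol:

    /* Toy write-invalidate coherence transitions for one DSM page.
     * Generic illustration of an MSI-style protocol, not from the book. */
    #include <stdio.h>

    typedef enum { INVALID, SHARED, MODIFIED } PageState;
    typedef enum { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE } Access;

    /* Next state of this node's copy of the page for a given access.
     * *needs_msg is set when a network transaction is required. */
    static PageState transition(PageState s, Access a, int *needs_msg) {
        *needs_msg = 0;
        switch (a) {
        case LOCAL_READ:
            if (s == INVALID) *needs_msg = 1;       /* read miss: fetch a copy */
            return s == MODIFIED ? MODIFIED : SHARED;
        case LOCAL_WRITE:
            if (s != MODIFIED) *needs_msg = 1;      /* gain ownership, invalidate others */
            return MODIFIED;
        case REMOTE_READ:
            return s == INVALID ? INVALID : SHARED; /* demote a modified copy */
        case REMOTE_WRITE:
            return INVALID;                         /* another node took ownership */
        }
        return s;
    }

    int main(void) {
        int needs_msg;
        PageState s = INVALID;
        s = transition(s, LOCAL_READ,  &needs_msg); /* INVALID  -> SHARED   */
        s = transition(s, LOCAL_WRITE, &needs_msg); /* SHARED   -> MODIFIED */
        s = transition(s, REMOTE_READ, &needs_msg); /* MODIFIED -> SHARED   */
        printf("final state: %d\n", s);
        return 0;
    }

Real DSM systems layer considerably more on top of this core idea, such as directories that track which nodes hold copies and relaxed consistency models that reduce the number of invalidation messages.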




Scalable Shared-Memory Multiprocessing


Book Description

Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.