Programming Massively Parallel Processors


Book Description

Programming Massively Parallel Processors: A Hands-on Approach, Second Edition, teaches students how to program massively parallel processors. It offers a detailed discussion of various techniques for constructing parallel programs, and case studies demonstrate a development process that begins with computational thinking and ends with effective, efficient parallel programs. The guide shows students and professionals alike the basic concepts of parallel programming and GPU architecture, covering performance, floating-point formats, parallel patterns, and dynamic parallelism in depth. This revised edition contains more parallel programming examples, commonly used libraries such as Thrust, and explanations of the latest tools. It also provides new coverage of CUDA 5.0, improved performance, enhanced development tools, and increased hardware support; expanded coverage of related technology, including OpenCL, with new material on algorithm patterns, GPU clusters, host programming, and data parallelism; and two new case studies (on MRI reconstruction and molecular visualization) that explore the latest applications of CUDA and GPUs for scientific research and high-performance computing. This book should be a valuable resource for advanced students, software engineers, programmers, and hardware engineers.



A New Era in Computation


Book Description

The transition from serial to parallel computing, in which many operations are performed simultaneously and at tremendous speed, marks a new era in computation. These original essays explore the emerging modalities and potential impact of this technological revolution. Daniel Hillis, inventor of the superfast Connection Machine®, provides a clear explanation of massively parallel computing. The essays that follow investigate the rich possibilities, as well as the constraints, that parallel computation holds for the future. These possibilities include its tremendous potential for simulating currently intractable physical processes and for solving "monster" scientific problems (involving new algorithms and ways of thinking about problem solving that will change the way we think about the world), and its use in the neural sciences (where the biological model for parallel computation is the brain). Essays also address the gap between the promise of this new technology and our current educational system, and look at America's technological agenda for the 1990s. Daniel Hillis is Chief Scientist and James Bailey is Director of Marketing, both at Thinking Machines Corporation. A Daedalus special issue.

Selected Essays:

Preface, Stephen R. Graubard
What Is Massively Parallel Computing, and Why Is It Important? W. Daniel Hillis
Complex Adaptive Systems, John H. Holland
Perspectives on Parallel Computing, Yuefan Deng, James Glimm, David H. Sharp
Parallel Billiards and Monster Systems, Brosl Hasslacher
First We Reshape Our Computers, Then Our Computers Reshape Us: The Broader Intellectual Impact of Parallelism, James Bailey
Parallelism in Conscious Experience, Robert Sokolowski
Of Time, Intelligence, and Institutions, Felix E. Browder
Parallel Computing and Education, Geoffrey C. Fox
The Age of Computing: A Personal Memoir, N. Metropolis
What Should the Public Know about Mathematics? Philip J. Davis
America's Economic-Technological Agenda for the 1990s, Jacob T. Schwartz




Programming Environments for Massively Parallel Distributed Systems


Book Description

Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The acceptance of MPSs in academia is clearly growing; in industrial companies, however, their usage remains low. Programming MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines that are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, as well as algorithms and applications.