In-Memory Computing


Book Description

This book describes a comprehensive approach to the synthesis and optimization of logic-in-memory computing hardware and architectures using memristive devices, creating a firm foundation for practical applications. Readers will become familiar with a new generation of computer architectures that can potentially perform faster, because the need for communication between the processor and memory is eliminated. The discussion includes various synthesis methodologies and optimization algorithms targeting implementation cost metrics, including latency and area overhead, as well as the reliability issues caused by limited memory lifetime. Presents a comprehensive synthesis flow for the emerging field of logic-in-memory computing; describes automated compilation of programmable logic-in-memory computer architectures; includes several effective optimization algorithms that are also applicable to classical logic synthesis; investigates unbalanced write traffic in logic-in-memory architectures and describes wear-leveling approaches to alleviate it.
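To make the cost metrics concrete, here is a minimal Python sketch of logic-in-memory cost accounting, assuming a MAGIC-style crossbar whose only native in-array operation is NOR; the class names, cell naming, and the one-NOR-per-cycle cost model are illustrative assumptions, not the book's synthesis flow.

    class Crossbar:
        """Toy memristive crossbar: cells hold bits, NOR is the only primitive."""

        def __init__(self):
            self.cells = {}    # cell name -> stored bit; cell count is the area proxy
            self.cycles = 0    # one in-array NOR per cycle; the latency proxy

        def write(self, name, bit):
            self.cells[name] = bit

        def nor(self, a, b, out):
            # MAGIC-style NOR: the result materializes in a fresh output cell.
            self.cells[out] = int(not (self.cells[a] or self.cells[b]))
            self.cycles += 1
            return out

    def xor_in_memory(bar, a, b):
        # XOR synthesized from the NOR primitive alone: five NOR steps.
        t1 = bar.nor(a, b, "t1")
        t2 = bar.nor(a, t1, "t2")
        t3 = bar.nor(b, t1, "t3")
        xn = bar.nor(t2, t3, "xnor")
        return bar.nor(xn, xn, "xor")

    bar = Crossbar()
    bar.write("a", 1)
    bar.write("b", 0)
    out = xor_in_memory(bar, "a", "b")
    print(bar.cells[out], "latency:", bar.cycles, "area:", len(bar.cells))
    # -> 1 latency: 5 area: 7

A synthesis flow of the kind the book describes would aim to minimize exactly these two counters: the number of NOR steps (latency) and the number of cells touched (area overhead).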







In-/Near-Memory Computing


Book Description

This book provides a structured introduction to the key concepts and techniques that enable in-/near-memory computing. For decades, processing-in-memory or near-memory computing has been attracting growing interest due to its potential to break the memory wall. Near-memory computing moves compute logic near the memory, and thereby reduces data movement. Recent work has also shown that certain memories can morph themselves into compute units by exploiting the physical properties of the memory cells, enabling in-situ computing in the memory array. While in- and near-memory computing can circumvent overheads related to data movement, they come at the cost of restricted flexibility of data representation and computation, design challenges of compute-capable memories, and difficulty in system and software integration. Therefore, wide deployment of in-/near-memory computing cannot be accomplished without techniques that enable efficient mapping of data-intensive applications to such devices, without sacrificing accuracy or increasing hardware costs excessively. This book describes various memory substrates amenable to in- and near-memory computing, architectural approaches for designing efficient and reliable computing devices, and opportunities for in-/near-memory acceleration of different classes of applications.
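As a rough illustration of the in-situ idea, the Python sketch below models each memory row as a single machine word and performs bulk bitwise operations on whole rows at once, in the spirit of DRAM proposals such as Ambit; the class and its interface are invented for this example, not the API of any real device.

    ROW_BITS = 64
    MASK = (1 << ROW_BITS) - 1

    class BitwiseArray:
        """Toy memory array whose rows can be combined without leaving the array."""

        def __init__(self):
            self.rows = {}

        def store(self, name, value):
            self.rows[name] = value & MASK

        def bulk_and(self, a, b, out):
            # One array-level operation computes ROW_BITS ANDs "inside" memory,
            # instead of streaming the data word by word through the CPU.
            self.rows[out] = self.rows[a] & self.rows[b]

        def bulk_or(self, a, b, out):
            self.rows[out] = self.rows[a] | self.rows[b]

    # Typical use case: intersecting bitmap indexes entirely in memory.
    mem = BitwiseArray()
    mem.store("age_over_30", 0b1011)
    mem.store("in_region_7", 0b0011)
    mem.bulk_and("age_over_30", "in_region_7", "matches")
    print(bin(mem.rows["matches"]))   # -> 0b11: rows 0 and 1 satisfy both predicates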




A History of Modern Computing, second edition


Book Description

From the first digital computer to the dot-com crash—a story of individuals, institutions, and the forces that led to a series of dramatic transformations. This engaging history covers modern computing from the development of the first electronic digital computer through the dot-com crash. The author concentrates on five key moments of transition: the transformation of the computer in the late 1940s from a specialized scientific instrument to a commercial product; the emergence of small systems in the late 1960s; the beginning of personal computing in the 1970s; the spread of networking after 1985; and, in a chapter written for this edition, the period 1995-2001. The new material focuses on the Microsoft antitrust suit, the rise and fall of the dot-coms, and the advent of open source software, particularly Linux. Within the chronological narrative, the book traces several overlapping threads: the evolution of the computer's internal design; the effect of economic trends and the Cold War; the long-term role of IBM as a player and as a target for upstart entrepreneurs; the growth of software from a hidden element to a major character in the story of computing; and the recurring issue of the place of information and computing in a democratic society. The focus is on the United States (though Europe and Japan enter the story at crucial points), on computing per se rather than on applications such as artificial intelligence, and on systems that were sold commercially and installed in quantities.




Transactional Memory, 2nd Edition


Book Description

The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures that the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from the programmer to a compiler, a language runtime system, or hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010. Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions
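To make the commit/abort discipline concrete, here is a minimal Python sketch of one classic software transactional memory design: transactions buffer their writes, record the version of every cell they read, and validate those versions under a global lock at commit time, retrying on conflict. The names (TVar, atomically) and the single commit lock are simplifying assumptions for illustration, not the book's specific system, and the sketch omits refinements such as opacity.

    import threading

    class TVar:
        """A transactional variable: a value plus a version number."""
        def __init__(self, value):
            self.value = value
            self.version = 0

    _commit_lock = threading.Lock()

    def atomically(txn):
        """Run txn(read, write) with all-or-nothing commit semantics."""
        while True:                          # abort means: retry from scratch
            reads, writes = {}, {}

            def read(tvar):
                if tvar in writes:           # a transaction sees its own writes
                    return writes[tvar]
                reads.setdefault(tvar, tvar.version)
                return tvar.value

            def write(tvar, value):
                writes[tvar] = value

            result = txn(read, write)

            with _commit_lock:
                # Validate: every cell read must still be at its recorded version.
                if all(t.version == v for t, v in reads.items()):
                    for t, value in writes.items():
                        t.value = value
                        t.version += 1
                    return result            # commit: writes become visible together
            # Validation failed: a conflicting transaction committed first,
            # so this one aborts and re-executes.

    # Usage: an atomic transfer between two shared accounts.
    checking, savings = TVar(100), TVar(0)

    def transfer(read, write):
        write(checking, read(checking) - 10)
        write(savings, read(savings) + 10)

    atomically(transfer)
    print(checking.value, savings.value)     # -> 90 10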




The Elements of Computing Systems


Book Description

This title gives students an integrated and rigorous picture of applied computer science, as it comes into play in the construction of a simple yet powerful computer system.
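The book's well-known approach begins with a single primitive, the NAND gate, and composes every other logic element from it. A tiny Python sketch of that first step (the function names are mine, not the book's hardware description language):

    def nand(a, b):
        # The single primitive: 0 only when both inputs are 1.
        return int(not (a and b))

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor_(a, b):
        return or_(and_(a, not_(b)), and_(not_(a), b))

    # Truth-table check: XOR built from nothing but NAND.
    assert [xor_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]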




The Cache Memory Book


Book Description

The Second Edition of The Cache Memory Book introduces systems designers to the concepts behind cache design. The book teaches the basic cache concepts as well as more exotic techniques, and leads readers through some of the most intricate protocols used in complex multiprocessor caches. Written in an accessible, informal style, this text demystifies cache memory design by translating cache concepts and jargon into practical methodologies and real-life examples. It also provides adequate detail to serve as a reference book for ongoing work in cache memory design. The Second Edition includes an updated and expanded glossary of cache memory terms and buzzwords, new real-world applications of cache memory design, and a new chapter on cache "tricks". It illustrates detailed example designs of caches; provides numerous examples in the form of block diagrams, timing waveforms, state tables, and code traces; defines and discusses more than 240 cache-specific buzzwords, comparing in detail the relative merits of different design methodologies; and includes an extensive glossary, complete with clear definitions, synonyms, and references to the appropriate text discussions.
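As a taste of the basic concepts the book starts from, here is a minimal Python sketch of a direct-mapped cache lookup, where an address is split into tag, index, and block-offset fields and a hit is a single tag compare; the block and index sizes are arbitrary illustrative choices.

    BLOCK_BITS = 6                 # 64-byte blocks
    INDEX_BITS = 8                 # 256 sets
    NUM_SETS = 1 << INDEX_BITS

    class DirectMappedCache:
        def __init__(self):
            self.valid = [False] * NUM_SETS
            self.tags = [0] * NUM_SETS
            self.hits = self.misses = 0

        def access(self, addr):
            # Address layout (high to low): tag | index | block offset.
            index = (addr >> BLOCK_BITS) & (NUM_SETS - 1)
            tag = addr >> (BLOCK_BITS + INDEX_BITS)
            if self.valid[index] and self.tags[index] == tag:
                self.hits += 1
            else:                   # miss: fill (and possibly evict) the line
                self.misses += 1
                self.valid[index], self.tags[index] = True, tag

    cache = DirectMappedCache()
    for addr in (0x1000, 0x1004, 0x1040, 0x1000, 0x5000, 0x1000):
        cache.access(addr)
    print(cache.hits, cache.misses)
    # -> 2 4: 0x5000 maps to the same set as 0x1000, so the final
    #    access to 0x1000 is a conflict miss after the eviction.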




In-Memory Computing Second Edition


Book Description

Does In-Memory Computing analysis show the relationships among important In-Memory Computing factors? Which In-Memory Computing goals are the most important? Is In-Memory Computing linked to key business goals and objectives? What prevents me from making the changes I know will make me a more effective In-Memory Computing leader? Who are the In-Memory Computing improvement team members, including Management Leads and Coaches? This exclusive In-Memory Computing self-assessment will make you the entrusted In-Memory Computing domain expert by revealing just what you need to know to be fluent and ready for any In-Memory Computing challenge. How do I reduce the effort in the In-Memory Computing work to be done to get problems solved? How can I ensure that plans of action include every In-Memory Computing task and that every In-Memory Computing outcome is in place? How will I save time investigating strategic and tactical options and ensuring In-Memory Computing costs are low? How can I deliver tailored In-Memory Computing advice instantly with structured going-forward plans? There's no better guide through these mind-expanding questions than acclaimed best-selling author Gerard Blokdyk. Blokdyk ensures all In-Memory Computing essentials are covered, from every angle: the In-Memory Computing self-assessment shows succinctly and clearly what needs to be clarified to organize the required activities and processes so that In-Memory Computing outcomes are achieved. It contains extensive criteria grounded in past and current successful projects and activities by experienced In-Memory Computing practitioners. Their mastery, combined with the easy elegance of the self-assessment, provides its superior value to you in knowing how to ensure that the outcome of any efforts in In-Memory Computing is maximized with professional results. Your purchase includes access details to the In-Memory Computing self-assessment dashboard download, which gives you a dynamically prioritized, projects-ready tool and shows you exactly what to do next. Your exclusive instant access details can be found in your book.




A Primer on Memory Consistency and Cache Coherence


Book Description

Many modern computer systems, including homogeneous and heterogeneous architectures, support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached copies of data are kept up-to-date. The goal of this primer is to provide readers with a basic understanding of consistency and coherence. This understanding includes both the issues that must be solved and a variety of solutions. We present both high-level concepts and specific, concrete examples from real-world systems. This second edition reflects a decade of advancements since the first edition and includes, among other more modest changes, two new chapters: one on consistency and coherence for non-CPU accelerators (with a focus on GPUs) and one that points to formal work and tools on consistency and coherence.
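A classic way to see what a consistency model permits is a litmus test. The Python sketch below enumerates every sequentially consistent interleaving of the "store buffering" test and shows that the outcome r0 == r1 == 0 never occurs under SC, although weaker models such as x86-TSO allow it, because each store can wait in a private store buffer while the later load executes. The encoding is my own illustration, not an example from the book.

    from itertools import permutations

    # Thread 0 runs: x = 1; r0 = y.  Thread 1 runs: y = 1; r1 = x.
    PROGRAMS = [
        [("store", "x"), ("load", "y")],   # thread 0, in program order
        [("store", "y"), ("load", "x")],   # thread 1, in program order
    ]

    def interleavings():
        # Every SC execution is a merge of the two programs, each in order.
        for pick in set(permutations([0, 0, 1, 1])):
            pos, schedule = [0, 0], []
            for t in pick:
                schedule.append((t, PROGRAMS[t][pos[t]]))
                pos[t] += 1
            yield schedule

    outcomes = set()
    for schedule in interleavings():
        mem = {"x": 0, "y": 0}
        regs = [None, None]
        for tid, (op, var) in schedule:
            if op == "store":
                mem[var] = 1
            else:
                regs[tid] = mem[var]
        outcomes.add(tuple(regs))

    print(sorted(outcomes))        # (0, 0) never appears under SC...
    assert (0, 0) not in outcomes  # ...but TSO hardware can produce it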




In-Memory Data Management


Book Description

In the last fifty years the world has been completely transformed through the use of IT. We have now reached a new inflection point. This book presents, for the first time, how in-memory data management is changing the way businesses are run. Today, enterprise data is split into separate databases for performance reasons. Multi-core CPUs, large main memories, cloud computing and powerful mobile devices are serving as the foundation for the transition of enterprises away from this restrictive model. This book provides the technical foundation for processing combined transactional and analytical operations in the same database. In the year since we published the first edition of this book, the performance gains enabled by the use of in-memory technology in enterprise applications have truly marked an inflection point in the market. The new content in this second edition focuses on the development of these in-memory enterprise applications, showing how they leverage the capabilities of in-memory technology. The book is intended for university students, IT professionals and IT managers, but also for senior management who wish to create new business processes.
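As a toy illustration of running transactional and analytical operations against the same in-memory data, the Python sketch below keeps one column-oriented table that serves both row inserts and an aggregate scan, with no separate analytical copy; the schema and data are invented for the example.

    class ColumnStore:
        """Toy in-memory table stored column-wise rather than row-wise."""

        def __init__(self, columns):
            self.data = {name: [] for name in columns}

        def insert(self, row):
            # Transactional-style write: append one value to each column.
            for name, value in row.items():
                self.data[name].append(value)

        def sum_where(self, agg_col, pred_col, pred):
            # Analytical-style scan: columnar layout means only the two
            # columns the query touches are read.
            return sum(v for v, k in zip(self.data[agg_col],
                                         self.data[pred_col]) if pred(k))

    sales = ColumnStore(["region", "amount"])
    sales.insert({"region": "EU", "amount": 120})
    sales.insert({"region": "US", "amount": 80})
    sales.insert({"region": "EU", "amount": 50})

    # The aggregate sees the freshly written rows immediately.
    print(sales.sum_where("amount", "region", lambda r: r == "EU"))  # -> 170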