Distributed Shared Memory Consistency Models


Book Description

This book addresses the specification and verification of relaxed consistency models for distributed shared memory, specifically weak consistency models. For this purpose, an abstract Distributed Shared Memory (DSM) system has been designed and implemented using CADP (Construction and Analysis of Distributed Processes). In a DSM, sequential consistency unnecessarily reduces system performance because it does not allow reordering or pipelining of memory operations. Weak consistency permits the reordering of memory events and the buffering or pipelining of memory accesses, and therefore improves the performance of the DSM system. For any critical system, it is important to develop methods that increase our confidence in its correctness; one such method is formal verification. To verify the weak consistency model, its properties have been specified and verified on the abstract DSM system using the CADP toolbox.
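The performance argument above rests on allowing reorderings that sequential consistency forbids. The following store-buffering litmus test, a standard illustration not taken from the book, shows the difference in C++: with relaxed orderings both loads may return 0, while sequentially consistent atomics rule that outcome out.

    // Store-buffering litmus test: under sequential consistency, at least one
    // thread must observe the other's write, so (r1 == 0 && r2 == 0) is
    // forbidden.  Under weaker orderings the stores may be buffered or
    // reordered past the loads, and both loads can return 0.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1 = 0, r2 = 0;

    void t0() {
        x.store(1, std::memory_order_relaxed);   // may be buffered/reordered
        r1 = y.load(std::memory_order_relaxed);
    }

    void t1() {
        y.store(1, std::memory_order_relaxed);
        r2 = x.load(std::memory_order_relaxed);
    }

    int main() {
        std::thread a(t0), b(t1);
        a.join(); b.join();
        // r1 == 0 && r2 == 0 is possible here; replacing memory_order_relaxed
        // with memory_order_seq_cst restores the sequentially consistent result.
        std::printf("r1=%d r2=%d\n", r1, r2);
    }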




Shared Memory Consistency Models


Book Description

Shared memory systems should support parallelization at the computation (multiprocessor), communication (Network-on-Chip, NoC), and memory architecture levels to exploit their potential performance benefits. Such systems face the critical issues of memory consistency and coherence. The memory consistency issue arises from unconstrained memory operations, which can lead to unexpected system behavior; memory consistency models are used to resolve it. Relaxed or weaker consistency models enforce fewer ordering constraints on memory operations and admit more system optimizations than stricter models. This book discusses novel realization schemes and a scalability analysis of the strict Sequential Consistency (SC) model and the relaxed memory consistency models Total Store Ordering (TSO), Partial Store Ordering (PSO), Weak Ordering (WO), Release Consistency (RC), and Protected Release Consistency (PRC) in NoC-based distributed shared memory multiprocessor systems. This study should help general readers and professionals understand the critical issue of memory consistency both in NoC-based systems and in general-purpose multiprocessor systems.
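As a generic illustration of the ordering constraints these models relax (not one of the book's realization schemes), the message-passing litmus test below uses a release/acquire pairing, which roughly mirrors the synchronization style of release-consistent models, to guarantee that a reader sees data published before a flag is raised; without such a pairing, models weaker than TSO may reorder the two stores.

    // Message-passing litmus test: thread 0 writes data then sets a flag;
    // thread 1 waits for the flag, then reads data.  TSO keeps the two stores
    // ordered, but weaker models (PSO, WO, RC) may reorder them, so an
    // explicit release/acquire pairing is needed for the reader to be
    // guaranteed to see data == 42.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;                      // ordinary shared data
    std::atomic<bool> flag{false};     // synchronization flag

    void producer() {
        data = 42;
        flag.store(true, std::memory_order_release);   // publish prior writes
    }

    void consumer() {
        while (!flag.load(std::memory_order_acquire)) {}  // observe published writes
        assert(data == 42);   // guaranteed by the release/acquire ordering
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join(); c.join();
    }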




Consistent Distributed Storage


Book Description

Providing a shared memory abstraction in distributed systems is a powerful tool that can simplify the design and implementation of software systems for networked platforms. It enables system designers to work with abstract readable and writable objects without dealing with the complexity and dynamism of the underlying platform. The key property of a shared memory implementation is the consistency guarantee it provides under concurrent access to the shared objects. The most intuitive memory consistency model is atomicity, because of its equivalence with a memory system where accesses occur serially, one at a time. Emulation of shared atomic memory in distributed systems is an active area of research and development. The problem proves to be challenging, especially in distributed message-passing settings with unreliable components, as is often the case in networked systems. We present several approaches to implementing shared memory services with the help of replication on top of message-passing distributed platforms subject to a variety of perturbations in the computing medium.
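The sketch below, a simplified failure-free illustration rather than any of the book's algorithms, shows the quorum-based pattern common to such emulations (in the style of ABD): each replica holds a (tag, value) pair, a write learns the highest tag and then propagates a higher one, and a read returns the highest-tagged value after writing it back. The replica and tag names are illustrative only, and the message rounds are replaced by in-process loops.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Logical timestamp ordering writes: (sequence number, writer id).
    struct Tag { int seq = 0; int writer = 0; };
    bool operator<(const Tag& a, const Tag& b) {
        return a.seq != b.seq ? a.seq < b.seq : a.writer < b.writer;
    }

    // One replica of the register; in a real system this lives on a server.
    struct Replica { Tag tag; int value = 0; };

    struct AtomicRegister {
        std::vector<Replica> replicas;
        explicit AtomicRegister(int n) : replicas(n) {}

        // Each loop below stands in for one round of messages acknowledged
        // by a majority of the replicas.
        Tag highestTag() const {
            Tag t;
            for (const auto& r : replicas) t = std::max(t, r.tag);
            return t;
        }
        void write(int writerId, int v) {
            Tag t = highestTag();                 // phase 1: learn the highest tag
            t.seq += 1; t.writer = writerId;
            for (auto& r : replicas)              // phase 2: store (tag, value)
                if (r.tag < t) { r.tag = t; r.value = v; }
        }
        int read() {
            Tag best; int v = 0;
            for (const auto& r : replicas)        // phase 1: find highest-tagged value
                if (best < r.tag) { best = r.tag; v = r.value; }
            for (auto& r : replicas)              // phase 2: write back before returning
                if (r.tag < best) { r.tag = best; r.value = v; }
            return v;
        }
    };

    int main() {
        AtomicRegister reg(5);                    // five replicas, majority = 3
        reg.write(/*writerId=*/1, 7);
        std::printf("read -> %d\n", reg.read());  // prints 7
    }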




Modeling Sequential Consistency in a Distributed Shared Memory System


Book Description

A compact deterministic and stochastic Petri net (DSPN) model is employed for a quantitative performance analysis of the sequential consistency protocol. A detailed evaluation of the sequential consistency protocol is presented, which gives important hints for designing more relaxed consistency models.




Views and Consistencies in Distributed Shared Memory


Book Description

The distributed shared memory (DSM) abstraction is a very popular programming paradigm in parallel and distributed environments. However, DSM often suffers from performance problems, as consistency requirements incur long access latencies that cannot be overlapped with other operations in a process. Sequential consistency is the most general consistency requirement for DSM systems. This thesis explores two different avenues to solve the performance problem for DSM systems.

First, for sequentially consistent DSM, we introduce a new strategy to minimize synchronization cost and maximize the hiding of synchronization delays in a process. The strategy is based on knowledge of spatial locality in the sharing of memory objects. An access graph is used to capture the sharing relationship among processes via the shared objects. We show that if accesses in all cycles are 'properly' synchronized, then the execution is guaranteed to be sequentially consistent. We develop two distinct solution strategies to ensure proper synchronization: (i) the neighbor protocol, in which conflicting accesses between two neighbors in an access cycle must be synchronized, and (ii) the flush protocol, in which asynchronous accesses in an access cycle must eventually be synchronized by a special flush access in the cycle. Simulation experiments have shown significant performance improvements with our protocols, especially with the flush protocol.

Another strategy to improve the performance of DSM systems is to adopt a weaker consistency model so that blocking among some memory operations can be removed. In this thesis, we use the primitive notions of program order and value order to define the global view. Using this as a seed, various consistency models evolve and form multiple hierarchies of models. The creation of these models and hierarchies comes via one of the following means: (i) a global view is augmented with additional ordering among its operations whenever some orderings exist, or (ii) besides linearizability of a global view, certain orderings must not co-exist in it. The former involves augmentation rules, and the latter involves causality requirements. The creation of these hierarchies leads to several novel consequences: the notion of exact implementation is introduced, new protocols are discovered, and precise analysis of the access behaviors of an application becomes possible.
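Since the first avenue hinges on identifying access cycles among sharing processes, the minimal sketch below shows one way such cycle-closing sharing relationships could be detected with a union-find structure; the processes, edges, and reporting are illustrative and do not reproduce the thesis's neighbor or flush protocols.

    #include <cstdio>
    #include <numeric>
    #include <utility>
    #include <vector>

    // Processes are vertices; an edge connects two processes that share a
    // memory object.  An edge that closes a cycle marks accesses that would
    // need to be 'properly' synchronized.
    struct DisjointSet {
        std::vector<int> parent;
        explicit DisjointSet(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
        int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
        bool unite(int a, int b) {        // returns false if a and b are already connected
            a = find(a); b = find(b);
            if (a == b) return false;
            parent[a] = b;
            return true;
        }
    };

    int main() {
        // Edges of a hypothetical access graph: (p, q) share at least one object.
        std::vector<std::pair<int,int>> sharing = {{0,1}, {1,2}, {2,0}, {2,3}};
        DisjointSet ds(4);
        for (auto [p, q] : sharing)
            if (!ds.unite(p, q))          // this edge closes an access cycle
                std::printf("accesses between P%d and P%d lie on a cycle: synchronize\n", p, q);
    }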




Distributed Shared Memory and Data Consistency


Book Description

This work investigates the feasibility of data consistency models for use in Distributed Shared Memory (DSM) for Wireless Sensor Networks (WSNs), enabling more powerful distributed systems with reliable data exchange. It starts with an introduction to WSNs and consistency-related approaches. Building on these basics, mechanisms that enable data consistency are discussed as the theoretical framework for the prototypical implementation of a data-consistency middleware developed as part of this work. The proposed middleware adapts mechanisms known from the original memory consistency approaches to make them usable in the sensor network area and also introduces its own low-cost mechanisms. The latter are based, at least in part, on the idea that within WSN shared memory the information itself is the major concern, so replica update rates can be tailored to the application. To keep the middleware easy to use, the replication schemes and consistency mechanisms are defined by the application engineer as a policy. The most appropriate memory consistency models were implemented and evaluated using the framework proposed in this work.
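As a rough sketch of what such an engineer-defined policy might look like, the structure below pairs each shared object with a consistency model, a replica count, and an application-specific update rate. The field and enum names are hypothetical; the book's middleware defines its own policy format.

    #include <cstdint>
    #include <cstdio>
    #include <string>

    enum class ConsistencyModel { Strict, Sequential, Eventual };

    // Hypothetical per-object policy supplied by the application engineer.
    struct ReplicationPolicy {
        std::string object;            // shared object the policy applies to
        ConsistencyModel model;        // consistency mechanism to enforce
        uint32_t replicas;             // number of nodes holding a copy
        uint32_t updateIntervalMs;     // replica update rate, tailored to the application
    };

    int main() {
        // Low-power sensing data tolerates stale replicas; alarms do not.
        ReplicationPolicy temperature{"temperature", ConsistencyModel::Eventual, 3, 5000};
        ReplicationPolicy alarm{"alarm", ConsistencyModel::Sequential, 5, 0};
        std::printf("%s: %u replicas, update every %u ms\n",
                    temperature.object.c_str(), temperature.replicas, temperature.updateIntervalMs);
        std::printf("%s: %u replicas, immediate propagation\n",
                    alarm.object.c_str(), alarm.replicas);
    }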




Comparative Study of Various Consistency Models in Distributed Shared Memory System


Book Description

Distributed Shared Memory (DSM) is a collection of nodes or clusters, each with its own memory, connected by an interconnection network. The key issue in DSM is keeping the memory pages consistent, that is, the degree of consistency that has to be maintained for the shared memory data. Maintaining perfect consistency is especially costly when there is a large gap between the latency and throughput of local memory accesses on the one hand and of the network connecting the machines on which the copies reside on the other. The solution may be to accept less than perfect consistency as the price for better performance. This paper reviews various memory consistency models used in different DSM systems.
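The toy calculation below illustrates that latency gap: a write that synchronously updates every remote copy pays a network round trip per copy, while a write that propagates lazily returns at local-memory speed but lets other copies go stale for a while. The latency figures are made-up numbers for illustration only.

    #include <cstdio>

    constexpr int kLocalNs   = 100;      // local memory access
    constexpr int kNetworkNs = 100000;   // round trip to one remote copy

    // Strict consistency waits for every copy; a relaxed model updates
    // only the local copy and propagates the change later.
    int writeCostNs(int remoteCopies, bool synchronous) {
        return synchronous ? kLocalNs + remoteCopies * kNetworkNs : kLocalNs;
    }

    int main() {
        std::printf("strict write over 4 copies: %d ns\n", writeCostNs(4, true));
        std::printf("relaxed write (lazy propagation): %d ns\n", writeCostNs(4, false));
    }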




Distributed Shared Memory


Book Description

The papers presented in this text survey both distributed shared memory (DSM) research efforts and commercial DSM systems. The book discusses the relevant issues that make DSM one of the most attractive approaches for building large-scale, high-performance multiprocessor systems. The authors provide a general introduction to the DSM field as well as a broad survey of basic DSM concepts, mechanisms, design issues, and systems. The book concentrates on basic DSM algorithms, their enhancements, and their performance evaluation. In addition, it details implementations that employ DSM solutions at both the software and the hardware level. This guide is a research and development reference that provides state-of-the-art information useful to architects, designers, and programmers of DSM systems.