System-Level Analysis and Design under Uncertainty


Book Description

One major problem for the designer of electronic systems is the presence of uncertainty, which is due to phenomena such as process and workload variation. Very often, uncertainty is inherent and inevitable. If ignored, it can lead to degradation of the quality of service in the best case and to severe faults or burnt silicon in the worst case. Thus, it is crucial to analyze uncertainty and to mitigate its damaging consequences by designing electronic systems in such a way that they effectively and efficiently take uncertainty into account. We begin by considering techniques for deterministic system-level analysis and design of certain aspects of electronic systems. These techniques do not take uncertainty into account, but they serve as a solid foundation for those that do. Our attention revolves primarily around power and temperature, as they are of central importance for attaining robustness and energy efficiency. We develop a novel approach to dynamic steady-state temperature analysis of electronic systems and apply it in the context of reliability optimization. We then proceed to develop techniques that address uncertainty. The first technique is designed to quantify the variability of process parameters, which is induced by process variation, across silicon wafers based on indirect and potentially incomplete and noisy measurements. The second technique is designed to study diverse system-level characteristics with respect to the variability originating from process variation. In particular, it allows for analyzing transient temperature profiles as well as dynamic steady-state temperature profiles of electronic systems. This is illustrated by considering a problem of design-space exploration with probabilistic constraints related to reliability. The third technique is designed to efficiently tackle sources of uncertainty that are less regular than process variation, such as workload variation. This technique is exemplified by analyzing the effect that workload units with uncertain processing times have on the timing-, power-, and temperature-related characteristics of the system under consideration. We also address the issue of runtime management of electronic systems that are subject to uncertainty. In this context, we perform an early investigation of the utility of advanced prediction techniques for the purpose of fine-grained long-range forecasting of resource usage in large computer systems. All the proposed techniques are assessed by extensive experimental evaluations, which demonstrate the superior performance of our approaches to the analysis and design of electronic systems compared to existing techniques.
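
As a rough illustration of the kind of analysis involved, the sketch below propagates process-induced power variability into a temperature distribution by plain Monte Carlo sampling over a first-order thermal model. The model, parameter values, and distribution are hypothetical stand-ins; the book itself develops considerably more efficient techniques for this class of problem.

```python
# Illustrative sketch (not the book's method): Monte Carlo propagation of
# process-induced power variation through a first-order thermal RC model.
# All parameter values and distributions below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

T_AMB = 45.0   # ambient temperature, deg C
R_TH  = 2.0    # thermal resistance, K/W
C_TH  = 0.05   # thermal capacitance, J/K
DT    = 1e-3   # integration time step, s
STEPS = 2000   # simulated horizon: 2 s

def transient_temperature(power):
    """Euler integration of C_th * dT/dt = P - (T - T_amb) / R_th."""
    temps = np.empty(STEPS)
    temp = T_AMB
    for k in range(STEPS):
        temp += DT * (power - (temp - T_AMB) / R_TH) / C_TH
        temps[k] = temp
    return temps

# Process variation modeled (hypothetically) as a lognormal spread in power.
powers = rng.lognormal(mean=np.log(10.0), sigma=0.15, size=500)
peaks = np.array([transient_temperature(p).max() for p in powers])

print(f"peak temperature: mean {peaks.mean():.1f} C, "
      f"95th percentile {np.percentile(peaks, 95):.1f} C")
```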




Uncertainty Modeling in Finite Element, Fatigue and Stability of Systems


Book Description

The functionality of modern structural, mechanical and electrical or electronic systems depends on their ability to perform under uncertain conditions. Consideration of uncertainties and their effect on system behavior is an essential and integral part of defining systems. In eleven chapters, leading experts present an overview of the current state of uncertainty modeling, analysis and design of large systems in four major areas: finite and boundary element methods (common structural analysis techniques), fatigue, stability analysis, and fault-tolerant systems. The content of this book is unique; it describes exciting research developments and challenges in emerging areas, and provides a sophisticated toolbox for tackling uncertainty modeling in real systems.




Dependability Modelling under Uncertainty


Book Description

Mechatronic design processes have become shorter and more parallelized, driven by growing time-to-market pressure. If dependability analyses are to influence the design, methods that enable quantitative analysis in early design stages are required. Due to the limited amount of data in this phase, the level of uncertainty is high, and explicit modeling of these uncertainties becomes necessary. This work introduces new uncertainty-preserving dependability methods for early design stages. These include the propagation of uncertainty through dependability models, the use of data from similar components in analyses, and the integration of uncertain dependability predictions into an optimization framework. It is shown that Dempster-Shafer theory can be an alternative to probability theory in early-design-stage dependability predictions. Expert estimates can be represented, input uncertainty is propagated through the system, and prediction uncertainty can be measured and interpreted. The resulting coherent methodology can be applied to represent the uncertainty in dependability models.
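
For readers unfamiliar with Dempster-Shafer theory, the sketch below implements its core operation, Dempster's rule of combination, for fusing two expert estimates. The frame of discernment and the mass assignments are hypothetical examples, not taken from the book.

```python
# Minimal sketch of Dempster's rule of combination. Mass functions are dicts
# mapping frozensets (subsets of the frame of discernment) to mass values.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

OK, FAIL = frozenset({"ok"}), frozenset({"fail"})
EITHER = OK | FAIL  # mass on the whole frame expresses ignorance

# Two hypothetical expert estimates about a component's dependability.
expert1 = {OK: 0.6, EITHER: 0.4}        # 0.4 expresses ignorance, not "fail"
expert2 = {OK: 0.5, FAIL: 0.2, EITHER: 0.3}

print(combine(expert1, expert2))
```

Note how the remaining mass on the whole frame after combination makes the residual prediction uncertainty explicit, which is exactly what a probability distribution forced over singletons cannot do.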




System-Level Design of GPU-Based Embedded Systems


Book Description

Modern embedded systems deploy several hardware accelerators, in a heterogeneous manner, to deliver high-performance computing. Among such devices, graphics processing units (GPUs) have earned a prominent position by virtue of their immense computing power. However, a system design that relies on the sheer throughput of GPUs is often incapable of satisfying the strict power- and time-related constraints faced by embedded systems. This thesis presents several system-level software techniques to optimize the design of GPU-based embedded systems under various graphics and non-graphics applications. Compared to conventional application-level optimizations, the system-wide view of our proposed techniques brings about several advantages. First, it allows the limitations and requirements of the various system parts to be fully incorporated into the design process. Second, it can unveil optimization opportunities by exposing the information flow between the processing components. Third, the techniques are generally applicable to a wide range of applications with similar characteristics. In addition, multiple system-level techniques can be combined with one another, or with application-level techniques, to further improve performance. We begin by studying some of the unique attributes of GPU-based embedded systems and discussing several factors that distinguish the design of these systems from that of conventional high-end GPU-based systems. We then proceed to develop two techniques that address, from different perspectives, an important challenge in the design of GPU-based embedded systems. The challenge arises from the fact that GPUs require a large amount of workload to be present at runtime in order to deliver high throughput. However, for some embedded applications, collecting large batches of input data requires an unacceptable waiting time, prompting a trade-off between throughput and latency. We also develop an optimization technique for GPU-based applications that addresses the memory bottleneck by utilizing the GPU L2 cache to shorten data access time. Moreover, in the area of graphics applications, and in particular with a focus on mobile games, we propose a power management scheme that reduces GPU power consumption by dynamically adjusting the display resolution while considering the user's visual perception at various resolutions. We also discuss the collective impact of the proposed techniques in tackling the design challenges of emerging complex systems. The proposed techniques are assessed through real-life experiments on GPU-based hardware platforms, which demonstrate the superior performance of our approaches compared to state-of-the-art techniques.
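
The throughput-latency trade-off described above can be made concrete with a back-of-the-envelope model: waiting for a batch to fill adds latency, while larger batches amortize the fixed kernel-launch cost. All numbers in the sketch below are hypothetical, not measurements from the thesis.

```python
# Back-of-the-envelope sketch of the batching trade-off: larger batches
# raise GPU throughput but delay the first input while the batch fills.
ARRIVAL_RATE = 2000.0   # inputs per second arriving at the system (hypothetical)
KERNEL_OVERHEAD = 2e-3  # fixed per-launch cost, s (hypothetical)
PER_ITEM_COST = 5e-5    # marginal GPU cost per batched item, s (hypothetical)

def end_to_end_latency(batch_size):
    """Worst-case latency: wait for the batch to fill, then process it."""
    fill_time = batch_size / ARRIVAL_RATE
    gpu_time = KERNEL_OVERHEAD + PER_ITEM_COST * batch_size
    return fill_time + gpu_time

def throughput(batch_size):
    """Sustained items/s when batches are processed back to back."""
    return batch_size / (KERNEL_OVERHEAD + PER_ITEM_COST * batch_size)

for b in (1, 8, 64, 256, 1024):
    print(f"batch {b:5d}: latency {end_to_end_latency(b) * 1e3:7.2f} ms, "
          f"throughput {throughput(b):9.0f} items/s")
```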




Robust Stream Reasoning Under Uncertainty


Book Description

Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and their refinement an important problem. Many contemporary approaches to stream reasoning focus on querying data streams in order to generate higher-level information, relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination deals with both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and their refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams over which spatio-temporal stream reasoning is performed; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over and reasoning about streams, can robustly perform stream reasoning even when the availability of streaming resources changes.
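
As a rough illustration of path checking under uncertainty, the sketch below evaluates an "always" property over a finite stream prefix in which samples may be missing, returning a three-valued verdict. The API and example are hypothetical and far simpler than the probabilistic, metric-temporal machinery developed in the thesis.

```python
# Illustrative sketch (hypothetical, not DyKnow's actual API) of path
# checking over a state stream whose samples may be missing, using a
# three-valued outcome to make the uncertainty explicit.
from enum import Enum

class V(Enum):
    TRUE = 1
    FALSE = 2
    UNKNOWN = 3

def check_always(stream, predicate):
    """Does `predicate` hold in every state of the (finite) stream prefix?
    States observed as None are unknown and weaken the verdict."""
    verdict = V.TRUE
    for state in stream:
        if state is None:
            verdict = V.UNKNOWN          # a gap: cannot confirm "always"
        elif not predicate(state):
            return V.FALSE               # one violation refutes "always"
    return verdict

# A stream of distance readings with one dropped sample.
readings = [4.2, 3.9, None, 4.5, 5.1]
print(check_always(readings, lambda d: d > 1.0))   # V.UNKNOWN
print(check_always(readings, lambda d: d > 4.0))   # V.FALSE
```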




Dependable Embedded Systems


Book Description

This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book deals with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation caused by transistor aging, process variation, temperature effects, soft errors, and the like. The book provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex, future many-core systems.




Fostering User Involvement in Ontology Alignment and Alignment Evaluation


Book Description

The abundance of data at our disposal empowers data-driven applications and decision making. The knowledge captured in the data, however, has not been utilized to its full potential, as it is only accessible through human interpretation and the data are distributed across heterogeneous repositories. Ontologies are a key technology for unlocking the knowledge in the data, providing means to model the world around us and to infer knowledge implicitly captured in the data. As data are hosted by independent organizations, we often need to use several ontologies and discover the relationships between them in order to support data and knowledge transfer. Broadly speaking, while ontologies provide formal representations and thus the basis, ontology alignment supplies integration techniques and thus the means to turn the data kept in distributed, heterogeneous repositories into valuable knowledge. While many automatic approaches for creating alignments have already been developed, user input is still required for obtaining the highest-quality alignments. This thesis focuses on supporting users during the cognitively intensive alignment process and makes several contributions. We have identified front- and back-end system features that foster user involvement during the alignment process and have investigated their support in existing systems through user interface evaluations and literature studies. We have further narrowed down our investigation to features connected to manual validation, arguably the most cognitively demanding task from the users' perspective, and have also considered the level of user expertise by assessing the impact of user errors on alignment quality. As developing and aligning ontologies is an error-prone task, we have focused on the benefits of integrating ontology alignment and debugging. We have enabled interactive comparative exploration and evaluation of multiple alignments at different levels of detail by developing a dedicated visual environment, Alignment Cubes, which allows alignments to be evaluated even in the absence of reference alignments. Inspired by the latest technological advances, we have investigated and identified three promising directions for the application of large, high-resolution displays in the field: improving navigation in the ontologies and their alignments, supporting reasoning, and supporting collaboration between users.




Aerospace System Analysis and Optimization in Uncertainty


Book Description

Spotlighting the field of Multidisciplinary Design Optimization (MDO), this book illustrates and implements state-of-the-art methodologies within the complex process of aerospace system design under uncertainty. The book provides approaches to integrating a multitude of components and constraints with the ultimate goal of reducing design cycles. Insights into a vast assortment of problems are provided, including discipline modeling, sensitivity analysis, uncertainty propagation, reliability analysis, and global multidisciplinary optimization. The extensive range of topics covered includes areas of current open research. This work is destined to become a fundamental reference for aerospace systems engineers and researchers, as well as for practitioners and engineers working in the areas of optimization and uncertainty. Part I covers the fundamentals. Part II presents methodologies for single-discipline problems, with a review of existing uncertainty propagation, reliability analysis, and optimization techniques. Part III is dedicated to uncertainty-based MDO and related issues. Part IV deals with three MDO-related issues: multi-fidelity approaches, multi-objective optimization, and mixed continuous/discrete optimization. Part V is devoted to test cases for aerospace vehicle design.
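
As a minimal example of the reliability-analysis techniques reviewed in Part II, the sketch below estimates a probability of failure by crude Monte Carlo sampling of a limit-state function; the limit state and distributions are hypothetical.

```python
# Crude Monte Carlo reliability analysis: estimate P_f = P(g(X) <= 0)
# for a limit-state function g. Limit state and distributions are
# hypothetical examples, not taken from the book.
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

# Hypothetical limit state: structural capacity minus aerodynamic load.
capacity = rng.normal(loc=12.0, scale=1.0, size=N)   # e.g., kN
load = rng.lognormal(mean=2.0, sigma=0.25, size=N)   # e.g., kN

g = capacity - load            # failure whenever g <= 0
p_f = np.mean(g <= 0.0)
std_err = np.sqrt(p_f * (1.0 - p_f) / N)

print(f"estimated P_f = {p_f:.2e} (std. error {std_err:.1e})")
```

The standard error shows why crude Monte Carlo becomes expensive for the very small failure probabilities typical of aerospace design, which motivates the more advanced reliability methods the book reviews.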




Design, Analysis and Test of Logic Circuits Under Uncertainty


Book Description

Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum and nanoscale technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite their probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.
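
One formalism associated with this line of work is the probabilistic transfer matrix (PTM), in which each gate maps a distribution over its inputs to a distribution over its outputs. The sketch below is a minimal PTM-style computation for a single faulty AND gate; the error model and all numbers are illustrative assumptions.

```python
# PTM-style sketch: a gate's matrix has one row per input pattern and one
# column per output pattern; an output-flip probability eps models faults.
# The gate, eps value, and input distribution are hypothetical examples.
import numpy as np

def faulty_ptm(itm, eps):
    """Perturb an ideal transfer matrix: output flips with probability eps."""
    flip = itm[:, ::-1]               # swapped output columns (1-bit output)
    return (1.0 - eps) * itm + eps * flip

# Ideal 2-input AND: rows are inputs 00,01,10,11; columns are outputs 0,1.
AND_ITM = np.array([[1.0, 0.0],
                    [1.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])

and_ptm = faulty_ptm(AND_ITM, eps=0.05)

# Independent inputs with P(a=1) = P(b=1) = 0.5: joint distribution over 00..11.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
joint_in = np.kron(a, b)              # parallel composition via Kronecker product

out = joint_in @ and_ptm              # serial composition via matrix product
print(f"P(output = 1) = {out[1]:.4f}")  # an ideal AND would give 0.2500
```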




Studying Simulations with Distributed Cognition


Book Description

Simulations are frequently used for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain the core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator, with mixed evidence for validity, that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing, and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to the validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.