Visualization of Time-Oriented Data


Book Description

This is an open access book. Time is an exceptional dimension with high relevance in medicine, engineering, business, science, biography, history, planning, or project management. Understanding time-oriented data via visual representations enables us to learn from the past in order to predict, plan, and build the future. This second edition builds upon the great success of the first edition. It maintains a brief introduction to visualization and a review of historical time-oriented visual representations. At its core, the book develops a systematic view of the visualization of time-oriented data. Separate chapters discuss interaction techniques and computational methods for supporting the visual data analysis. Many examples and figures illustrate the introduced concepts and techniques. So, what is new for the second edition? First of all, the second edition is now published as an open-access book so that anyone interested in the visualization of time and time-oriented data can read it. Second, the entire content has been revised and expanded to represent state-of-the-art knowledge. The chapter on interaction support now includes advanced methods for interacting with visual representations of time-oriented data. The second edition also covers the topics of data quality as well as segmentation and labeling. The comprehensive survey of classic and contemporary visualization techniques now provides more than 150 self-contained descriptions accompanied by illustrations and corresponding references. A completely new chapter describes how the structured survey can be used for the guided selection of suitable visualization techniques. For the second edition, our TimeViz Browser, the digital pendant to the survey of visualization techniques, received a major upgrade. It includes the same set of techniques as the book, but comes with additional filter and search facilities allowing scientists and practitioners to find exactly the solutions they are interested in.










Data Visualization


Book Description

Data visualization is currently a very active and vital area of research, teaching, and development. The term unites the established field of scientific visualization and the more recent field of information visualization. The success of data visualization is due to the soundness of the basic idea behind it: the use of computer-generated images to gain insight and knowledge from data and its inherent patterns and relationships. A second premise is the utilization of the broad bandwidth of the human sensory system in steering and interpreting complex processes and simulations involving data sets from diverse scientific disciplines, as well as large collections of abstract data from many sources. These concepts are extremely important and have a profound and widespread impact on the methodology of computational science and engineering, as well as on management and administration. The interplay between various application areas and their specific problem-solving visualization techniques is emphasized in this book. Reflecting the heterogeneous structure of data visualization, emphasis was placed on these topics:
- Visualization Algorithms and Techniques
- Volume Visualization
- Information Visualization
- Multiresolution Techniques
- Interactive Data Exploration
Data Visualization: The State of the Art presents the state of the art in scientific and information visualization techniques by experts in this field. It can serve as an overview for the inquiring scientist and as a basic foundation for developers. This edited volume contains chapters dedicated to surveys of specific topics, along with a great deal of original work not previously published, illustrated by examples from a wealth of applications. The book will also provide basic material for teaching state-of-the-art techniques in data visualization. Data Visualization: The State of the Art is designed to meet the needs of practitioners and researchers in scientific and information visualization. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.




High Performance Visualization


Book Description

Visualization and analysis tools, techniques, and algorithms have undergone a rapid evolution in recent decades to accommodate explosive growth in data size and complexity and to exploit emerging multi- and many-core computational platforms. High Performance Visualization: Enabling Extreme-Scale Scientific Insight focuses on the subset of scientific visualization …




Foundations of Data Visualization


Book Description

This is the first book that focuses entirely on the fundamental questions in visualization. Unlike other existing books in the field, it contains discussions that go far beyond individual visual representations and individual visualization algorithms. It offers a collection of investigative discourses that probe these questions from different perspectives, including concepts that help frame these questions and their potential answers, mathematical methods that underpin the scientific reasoning about these questions, empirical methods that facilitate the validation and falsification of potential answers, and case studies that stimulate hypotheses about potential answers while providing practical evidence for such hypotheses. Readers are not instructed to follow a specific theory; rather, their attention is drawn to a broad range of schools of thought and different ways of investigating fundamental questions. As such, the book represents the most significant collective effort to date to gather a large collection of discourses on the foundations of data visualization. Data visualization is a relatively young scientific discipline. Over the last three decades, a large collection of computer-supported visualization techniques has been developed, and the merits and benefits of using these techniques have been evidenced by numerous applications in practice. These technical advancements have given rise to scientific curiosity about fundamental questions such as why and how visualization works, when it is useful or effective and when it is not, and what the primary factors affecting its usefulness and effectiveness are. This book signifies timely and exciting opportunities to answer such fundamental questions by building on the wealth of knowledge and experience accumulated in developing and deploying visualization technology in practice.




Visualizing Time


Book Description

Art, or Science? Which of these is the right way to think of the field of visualization? This is not an easy question to answer, even for those who have many years' experience in making graphical depictions of data with a view to helping people understand it and take action. In this book, Graham Wills bridges the gap between the art and the science of visually representing data. He does not simply give rules and advice, but bases these on general principles and provides a clear path between them. This book is concerned with the graphical representation of time data and is written to cover a range of different users. A visualization expert designing tools for displaying time will find it valuable, but so should a financier assembling a report in a spreadsheet, or a medical researcher trying to display gene sequences using a commercial statistical package.




Distribution-based Summarization for Large Scale Simulation Data Visualization and Analysis


Book Description

The advent of high-performance supercomputers enables scientists to perform extreme-scale simulations that generate millions of cells and thousands of time steps. By exploring and analyzing the simulation outputs, scientists can gain a deeper understanding of the modeled phenomena. When the size of the simulation output is small, the common practice is to simply move the data to the machines that perform post analysis. However, as the size of the data grows, the limited bandwidth and capacity of the networking and storage devices that connect the supercomputer to the analysis machine become a major bottleneck. Visualizing and analyzing large-scale simulation datasets therefore poses significant challenges. This dissertation addresses the big data challenge and proposes distribution-based in-situ techniques. These techniques use the same supercomputer resources to analyze the raw data and generate compact data proxies that summarize the raw data statistically with distributions. Only the compact data proxies are moved to the post-analysis machine, overcoming the bottleneck. Because the distribution-based representation preserves the statistical properties of the data, it can facilitate flexible post-hoc data analysis and enable uncertainty quantification. We first focus on the problem of rendering large data volumes on resource-limited post-analysis machines. To tackle the limited I/O bandwidth and storage space, distributions are used to summarize the data. When visualizing the data, importance sampling is proposed to draw a small number of samples and minimize the demand for computational power. The error of the proxies is quantified and visually presented to scientists through uncertainty animation. We also tackle the problem of error reduction when approximating spatial information in distribution-based representations. Such error can lower visualization quality and hinder data exploration. The basic distribution-based approach is augmented with a proposed spatial distribution represented by a three-dimensional Gaussian mixture model (GMM). The new representation not only improves visualization quality but can also be used in various visualization techniques, such as volume rendering, uncertain isosurface extraction, and salient feature exploration. Next, a technique is developed for large-scale time-varying datasets. This representation stores the time-varying data at a lower temporal resolution and exploits temporal coherence to reconstruct the data at non-sampled time steps. Each pixel ray of a view at a non-sampled time step is decoupled into a value distribution and sample location information; data coherence is used to recover the sample locations so that less data needs to be stored. In addition, similar value distributions from multiple rays are represented by a single distribution to save further storage. Finally, a statistics-based super-resolution technique is proposed to address the big data problem caused by a huge parameter space. A few simulation runs with sampled parameters output full-resolution data, which is used to build prior knowledge. Data from the remaining runs in the parameter space is statistically down-sampled in situ to a compact representation to reduce the data size. These compact representations can be reconstructed to high resolution by combining them with the prior knowledge for data analysis.
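The summarize-then-sample pipeline described above can be illustrated with a minimal, hypothetical Python sketch (not the author's code): a volume is partitioned into blocks, each block is reduced to a per-block value histogram serving as the compact proxy (the dissertation additionally models spatial information with Gaussian mixture models), and an approximate volume is reconstructed post hoc by sampling from those distributions. The block size, bin count, and reconstruction-by-sampling step are illustrative assumptions, not the dissertation's exact design.

```python
import numpy as np

def summarize(volume, block=8, bins=32):
    """Reduce each block x block x block region to a normalized value histogram."""
    edges = np.linspace(float(volume.min()), float(volume.max()), bins + 1)
    nz, ny, nx = (s // block for s in volume.shape)
    proxies = np.empty((nz, ny, nx, bins))
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                cell = volume[k*block:(k+1)*block,
                              j*block:(j+1)*block,
                              i*block:(i+1)*block]
                hist, _ = np.histogram(cell, bins=edges)
                proxies[k, j, i] = hist / hist.sum()  # per-block value distribution
    return proxies, edges

def reconstruct(proxies, edges, block=8, rng=None):
    """Rebuild an approximate volume by sampling each block's distribution."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = 0.5 * (edges[:-1] + edges[1:])  # bin-center values to sample from
    nz, ny, nx, _ = proxies.shape
    out = np.empty((nz * block, ny * block, nx * block))
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                draws = rng.choice(centers, size=block**3, p=proxies[k, j, i])
                out[k*block:(k+1)*block,
                    j*block:(j+1)*block,
                    i*block:(i+1)*block] = draws.reshape(block, block, block)
    return out

# Toy usage: a 64^3 synthetic field is reduced to 8^3 histograms in situ,
# and only the proxies would need to travel to the post-analysis machine.
field = np.random.default_rng(1).normal(size=(64, 64, 64))
proxies, edges = summarize(field)
approx = reconstruct(proxies, edges)
print(proxies.size, "proxy values instead of", field.size)
```

In this toy setting the proxies occupy a small fraction of the raw volume, which mirrors the bandwidth and storage savings the dissertation targets; a GMM-based proxy would replace the histogram fitting and sampling steps while additionally capturing spatial structure.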




Data Visualization


Book Description

Designing a complete visualization system involves many subtle decisions. When designing a complex, real-world visualization system, such decisions involve many types of constraints, such as performance, platform (in)dependence, available programming languages and styles, user-interface toolkits, input/output data format constraints, integration with third-party code, and more. Focusing on those techniques and methods with the broadest applicability across fields, the second edition of Data Visualization: Principles and Practice provides a streamlined introduction to various visualization techniques. The book illustrates a wide variety of applications of data visualization, showing the range of problems that can be tackled by such methods, and emphasizes the strong connections between visualization and related disciplines such as imaging and computer graphics. It covers a wide range of sub-topics in data visualization: data representation; visualization of scalar, vector, tensor, and volumetric data; image processing and domain modeling techniques; and information visualization. What's new in the second edition:
- Additional visualization algorithms and techniques
- New examples of combined techniques for diffusion tensor imaging (DTI) visualization, illustrative fiber track rendering, and fiber bundling techniques
- Additional techniques for point-cloud reconstruction
- Additional advanced image segmentation algorithms
- Several important software systems and libraries
Algorithmic and software design issues are illustrated throughout by (pseudo)code fragments written in the C++ programming language. Exercises covering the topics discussed in the book, as well as datasets and source code, are provided as additional online resources.