High Performance Visualization


Book Description

Visualization and analysis tools, techniques, and algorithms have undergone a rapid evolution in recent decades to accommodate explosive growth in data size and complexity and to exploit emerging multi- and many-core computational platforms. High Performance Visualization: Enabling Extreme-Scale Scientific Insight focuses on the subset of scientific visualization concerned with algorithm design, implementation, and optimization for use on today's largest computational platforms.




Distribution-based Summarization for Large Scale Simulation Data Visualization and Analysis


Book Description

The advent of high-performance supercomputers enables scientists to perform extreme-scale simulations that generate outputs with millions of cells over thousands of time steps. By exploring and analyzing the simulation outputs, scientists can gain a deeper understanding of the modeled phenomena. When the simulation output is small, the common practice is to simply move the data to the machines that perform post-analysis. As the data grow, however, the limited bandwidth and capacity of the networking and storage devices that connect the supercomputer to the analysis machines become a major bottleneck, and visualizing and analyzing large-scale simulation datasets poses significant challenges. This dissertation addresses this big-data challenge with distribution-based in-situ techniques. These techniques use the supercomputer itself to analyze the raw data and generate compact data proxies that summarize it statistically with distributions; only the compact proxies are moved to the post-analysis machine, bypassing the bottleneck. Because the distribution-based representation preserves the statistical properties of the data, it can facilitate flexible post-hoc analysis and enable uncertainty quantification.

We first focus on rendering large data volumes on resource-limited post-analysis machines. To cope with limited I/O bandwidth and storage space, distributions are used to summarize the data. When visualizing the data, importance sampling is proposed to draw a small number of samples and minimize the demand on computational power. The error of the proxies is quantified and visually presented to scientists through uncertainty animation. We then tackle the problem of reducing the error introduced when approximating spatial information in distribution-based representations; such error can degrade visualization quality and hinder data exploration. The basic distribution-based approach is augmented with a proposed spatial distribution represented by a three-dimensional Gaussian Mixture Model (GMM). The new representation not only improves visualization quality but also supports various visualization techniques, such as volume rendering, uncertain isosurfaces, and salient feature exploration. Next, a technique is developed for large-scale time-varying datasets. This representation stores the time-varying data at a lower temporal resolution and exploits temporal coherence to reconstruct the data at non-sampled time steps: each pixel ray of a view at a non-sampled time step is decoupled into a value distribution and the samples' location information, the location information is recovered from data coherence, and similar value distributions from multiple rays are merged into one distribution to further reduce storage. Finally, a statistics-based super-resolution technique is proposed to address the big-data problem caused by a huge parameter space. Simulation runs at a few parameter samples output full-resolution data, which is used to build prior knowledge; data from the remaining runs in the parameter space are statistically down-sampled in situ into compact representations to reduce data size. These compact representations can be reconstructed to high resolution for analysis by combining them with the prior knowledge.
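
The core idea of summarizing each spatial block with a distribution in situ and reconstructing values post hoc by sampling from that distribution can be sketched as follows. This is a minimal illustration with per-block histograms on a regular grid; the block size, bin count, and function names are assumptions for demonstration, not the dissertation's actual GMM-based implementation.

```python
# Illustrative sketch (not the dissertation's implementation): summarize each
# spatial block of a scalar field with a histogram proxy, then reconstruct
# values post hoc by sampling from the per-block distribution.
import numpy as np

def summarize_blocks(field, block=8, bins=32):
    """Replace each block x block x block region with a histogram proxy."""
    nz, ny, nx = field.shape
    proxies = {}
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                vals = field[z:z+block, y:y+block, x:x+block].ravel()
                counts, edges = np.histogram(vals, bins=bins)
                proxies[(z, y, x)] = (counts, edges)
    return proxies

def sample_block(proxy, n_samples, rng):
    """Draw value samples from a histogram proxy (bins weighted by frequency)."""
    counts, edges = proxy
    probs = counts / counts.sum()
    picked = rng.choice(len(counts), size=n_samples, p=probs)
    # Jitter uniformly within each chosen bin.
    return rng.uniform(edges[picked], edges[picked + 1])

rng = np.random.default_rng(0)
field = rng.normal(size=(32, 32, 32))      # stand-in for raw simulation output
proxies = summarize_blocks(field)          # would run in situ on the supercomputer
samples = sample_block(proxies[(0, 0, 0)], 64, rng)   # post-hoc reconstruction
print(samples.mean(), samples.std())
```

In this sketch only the per-block histograms would leave the supercomputer, which is the storage and bandwidth saving the abstract describes; the dissertation's actual proxies and sampling strategy are more sophisticated.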










High-Performance Computing


Book Description

This book constitutes the refereed joint post-conference proceedings of the 6th International Symposium on High-Performance Computing, ISHPC 2005, held in Japan in 2005. It also includes the refereed post-proceedings of the First International Workshop on Advanced Low Power Systems, ALPS 2006, and selected papers from the Workshop on Applications for PetaFLOPS Computing, APC 2005. A total of 42 papers were carefully selected from 76 submissions, covering a wide range of topics.




Data Summarization for Large Time-varying Flow Visualization and Analysis


Book Description

The rapid growth of computing power has expedited scientific simulations, which can now generate data of unprecedented quality and quantity. This advancement has not been mirrored in I/O performance, however, and scientific research faces great challenges in visualizing and analyzing large-scale simulation results. Among areas of scientific research, fluid flow analysis plays an important role in many disciplines, such as aerospace, climate modeling, and medical applications. The data-intensive computation required for fluid flow visualization makes it difficult to devise efficient algorithms and frameworks for flow analysis. First, to analyze a time-varying flow field, pathline visualization is typically used to reveal particle trajectories in the flow. Pathline computation, however, has an irregular data access pattern that complicates out-of-core computation when the flow data are too large to fit in main memory; strategies for modeling the access pattern and improving spatial and temporal data locality are needed. Second, to avoid tremendous I/O latency, the simulated flow fields are typically down-sampled when they are stored, which inevitably affects the accuracy of the derived pathlines. Error reduction and modeling therefore become important to enable uncertainty visualization for better decision making. This dissertation addresses these challenges with data summarization approaches that efficiently process large data into succinct representations to facilitate flow analysis and visualization. First, a graph modeling approach is employed to encode the data access pattern of pathline computation, with which a cache-oblivious file layout algorithm and a work scheduling algorithm are proposed to optimize disk caching during out-of-core pathline visualization. Second, an incremental algorithm is devised that fits streaming time series of flow fields with higher-order polynomials and estimates errors in a compact distribution model; this distribution-based error modeling is shown to enable probabilistic uncertain pathline computation. Finally, a case study of jet engine stall is conducted for large flow simulations: vortex analysis and various anomaly detection methods are proposed to capture flow instability that may lead to stall, and comparative visualization techniques are employed to reveal and contrast temporal patterns in the detection results. Positive expert feedback shows the effectiveness and potential of the proposed methods for stall analysis in large-scale flow simulations.
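
The incremental fitting idea, accumulating normal-equation sums so a streaming time series can be fit with a polynomial without retaining the full series and summarizing residuals as a compact error estimate, can be sketched as below. The degree, class name, and error summary are illustrative assumptions, not the dissertation's exact formulation.

```python
# Hedged sketch of incremental polynomial fitting for a streaming time series.
# Only small sums are updated per sample; the coefficients come from solving
# the accumulated normal equations.
import numpy as np

class StreamingPolyFit:
    def __init__(self, degree=3):
        self.d = degree
        self.A = np.zeros((degree + 1, degree + 1))  # sum of phi(t) phi(t)^T
        self.b = np.zeros(degree + 1)                # sum of v * phi(t)
        self.samples = []  # kept here only to report residuals in this demo;
                           # an in-situ version would accumulate error moments instead

    def add(self, t, v):
        phi = np.array([t ** k for k in range(self.d + 1)])
        self.A += np.outer(phi, phi)
        self.b += v * phi
        self.samples.append((t, v))

    def coefficients(self):
        return np.linalg.solve(self.A, self.b)

    def residual_stats(self):
        c = self.coefficients()
        errs = [v - np.polyval(c[::-1], t) for t, v in self.samples]
        return float(np.mean(errs)), float(np.std(errs))  # compact error summary

fit = StreamingPolyFit(degree=3)
for t in np.linspace(0.0, 1.0, 50):   # stand-in for one grid point's time series
    fit.add(t, np.sin(2 * np.pi * t) + 0.01 * np.random.randn())
print(fit.coefficients(), fit.residual_stats())
```

The residual mean and spread stand in for the compact distribution model of error mentioned in the abstract, which downstream pathline computation could propagate as uncertainty.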




Visualization of Large Scale Volumetric Datasets


Book Description

In this thesis, we address the problem of large-scale data visualization from two aspects: dimensionality and resolution. We introduce a novel data structure called the Differential Time-Histogram Table (DTHT) for visualization of time-varying (4D) scalar data. The proposed data structure takes advantage of the coherence in time-varying datasets and allows efficient updates of the data needed for rendering during exploration and visualization, while guaranteeing that the visualized scalar field is within a given error tolerance of the sampled scalar field. To address high-resolution datasets, we propose a hierarchical data structure and introduce a novel hybrid framework to improve the quality of multi-resolution visualization. For more accurate rendering at coarser levels of detail, we reduce aliasing artifacts by approximating the data distribution with a Gaussian basis at each level of detail, and we reduce blurring by using transparent isosurfaces to capture high-frequency features usually missed in coarse-resolution renderings.
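
One way to picture the Gaussian-basis idea is a multiresolution pyramid in which each coarser cell stores the mean and variance of the finer cells it covers, rather than a single down-sampled value. The sketch below is an assumption for illustration (2x downsampling per level, law of total variance for combining children), not the thesis's DTHT or rendering code.

```python
# Minimal sketch: each coarser level of detail stores a Gaussian (mean, variance)
# per cell summarizing the 2x2x2 finer cells it covers.
import numpy as np

def build_gaussian_pyramid(volume, levels=3):
    """Return [(mean_volume, var_volume), ...] from fine to coarse."""
    mean = volume.astype(float)
    var = np.zeros_like(mean)          # finest level has no summarization error
    pyramid = [(mean, var)]
    for _ in range(levels):
        nz, ny, nx = mean.shape
        blocks = mean.reshape(nz // 2, 2, ny // 2, 2, nx // 2, 2)
        child_var = var.reshape(nz // 2, 2, ny // 2, 2, nx // 2, 2)
        new_mean = blocks.mean(axis=(1, 3, 5))
        # Law of total variance: spread of child means + average child variance.
        new_var = blocks.var(axis=(1, 3, 5)) + child_var.mean(axis=(1, 3, 5))
        pyramid.append((new_mean, new_var))
        mean, var = new_mean, new_var
    return pyramid

vol = np.random.default_rng(1).normal(size=(32, 32, 32))
for level, (m, v) in enumerate(build_gaussian_pyramid(vol)):
    print(level, m.shape, float(v.mean()))
```

A renderer could then treat each coarse cell as a small Gaussian distribution instead of a point sample, which is the spirit of the anti-aliasing argument in the abstract.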




Visualization Handbook


Book Description

The Visualization Handbook provides an overview of the field of visualization by presenting the basic concepts, providing a snapshot of current visualization software systems, and examining research topics that are advancing the field. This text is intended for a broad audience, including not only the visualization expert seeking advanced methods to solve a particular problem, but also the novice looking for general background information on visualization topics. The largest collection of state-of-the-art visualization research yet gathered in a single volume, this book includes articles by a "who's who" of international scientific visualization researchers covering every aspect of the discipline, including:

- Virtual environments for visualization
- Basic visualization algorithms
- Large-scale data visualization
- Scalar data isosurface methods
- Visualization software and frameworks
- Scalar data volume rendering
- Perceptual issues in visualization
- Various application topics, including information visualization

Edited by two of the best-known people in the field, with chapter authors who are authoritative experts in their own areas, the book covers a wide range of topics in 47 chapters, representing the state of the art of scientific visualization.




Data Visualization


Book Description

Data visualization is currently a very active and vital area of research, teaching, and development. The term unites the established field of scientific visualization and the more recent field of information visualization. The success of data visualization is due to the soundness of the basic idea behind it: the use of computer-generated images to gain insight and knowledge from data and its inherent patterns and relationships. A second premise is the utilization of the broad bandwidth of the human sensory system in steering and interpreting complex processes and simulations involving data sets from diverse scientific disciplines and large collections of abstract data from many sources. These concepts are extremely important and have a profound and widespread impact on the methodology of computational science and engineering, as well as on management and administration. The interplay between various application areas and their specific problem-solving visualization techniques is emphasized in this book. Reflecting the heterogeneous structure of data visualization, emphasis was placed on these topics:

- Visualization Algorithms and Techniques
- Volume Visualization
- Information Visualization
- Multiresolution Techniques
- Interactive Data Exploration

Data Visualization: The State of the Art presents the state of the art in scientific and information visualization techniques, written by experts in the field. It can serve as an overview for the inquiring scientist and as a basic foundation for developers. This edited volume contains chapters dedicated to surveys of specific topics as well as a great deal of original, previously unpublished work illustrated by examples from a wealth of applications. The book also provides basic material for teaching state-of-the-art techniques in data visualization. Data Visualization: The State of the Art is designed to meet the needs of practitioners and researchers in scientific and information visualization, and is also suitable as a secondary text for graduate-level students in computer science and engineering.