Despeckle Filtering for Ultrasound Imaging and Video, Volume II


Book Description

In ultrasound imaging and video, visual perception is hindered by speckle, a multiplicative noise that degrades image quality. Noise reduction is therefore essential, both for improving visual observation quality and as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video, as well as the theoretical background, algorithmic steps, and MATLAB™ code for the following groups of despeckle filters: linear despeckle filtering, non-linear despeckle filtering, diffusion despeckle filtering, and wavelet despeckle filtering. The goal of this book (book 2 of 2) is to demonstrate the use of a comparative evaluation framework, based on the despeckle filters introduced in book 1, in cardiovascular ultrasound image and video processing and analysis. More specifically, the despeckle filtering evaluation framework is based on texture analysis, image quality evaluation metrics, and visual evaluation by experts. The framework is applied in cardiovascular ultrasound image/video processing to the tasks of segmentation and structural measurement, texture analysis for differentiating between two classes (i.e., normal vs. diseased tissue), and efficient encoding for mobile applications. It is shown that despeckle noise reduction improved segmentation and measurement of the tissue structures investigated, increased the texture feature distance between normal and abnormal tissue, improved image/video quality evaluation and perception, and produced significantly lower bitrates in video encoding. Furthermore, to facilitate further applications, we have developed in MATLAB™ two different toolboxes that integrate image (IDF) and video (VDF) despeckle filtering, texture analysis, and image and video quality evaluation metrics.
The code for these toolboxes is open source and is available to download as a companion to the two monographs.
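The evaluation framework above combines despeckle filtering with objective image quality metrics. As a minimal illustration (written in Python rather than the books' MATLAB™ toolboxes; all function names here are illustrative, not taken from the IDF/VDF code), one can simulate multiplicative speckle, apply a simple non-linear (median) despeckle step, and score the result with PSNR:

```python
import numpy as np

def add_speckle(img, sigma=0.2, seed=None):
    """Simulate multiplicative speckle: g = f * n, with n ~ N(1, sigma^2)."""
    rng = np.random.default_rng(seed)
    return np.clip(img * rng.normal(1.0, sigma, img.shape), 0.0, 1.0)

def median_despeckle(img, k=3):
    """k x k median filter, a simple non-linear despeckle step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio (dB), a common image quality metric."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

On a smooth test image, the despeckled result scores a higher PSNR against the clean reference than the speckled input does, which is exactly the kind of comparison the evaluation framework automates across many filters.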




Virtual Design of an Audio Lifelogging System


Book Description

The availability of inexpensive, custom, highly integrated circuits is enabling some very powerful systems that bring together sensors, smart phones, wearables, cloud computing, and other technologies. To design these types of complex systems, we advocate a top-down simulation methodology that identifies problems early. This approach enables software development to start prior to expensive chip and hardware development. We call the overall approach virtual design. This book explains why simulation has become important for chip design and provides an introduction to some of the simulation methods used. The audio lifelogging research project demonstrates the virtual design process in practice. The goals of this book are to: explain how silicon design has become more closely involved with system design; show how virtual design enables top-down design; explain the utility of simulation at different abstraction levels; and show how open-source simulation software was used in audio lifelogging. The target audience for this book is faculty, engineers, and students who are interested in developing digital devices for Internet of Things (IoT) products.




Despeckle Filtering for Ultrasound Imaging and Video, Volume I


Book Description

It is well known that speckle is a multiplicative noise that degrades image and video quality and hinders the visual expert's evaluation in ultrasound imaging and video. This creates the need for robust image and video despeckling techniques for both routine clinical practice and tele-consultation. The goal of this book (book 1 of 2) is to introduce the problem of speckle occurring in ultrasound image and video, as well as the theoretical background (equations), the algorithmic steps, and the MATLAB™ code for the following groups of despeckle filters: linear filtering, nonlinear filtering, anisotropic diffusion filtering, and wavelet filtering. The book proposes a comparative evaluation framework for these despeckle filters based on texture analysis, image quality evaluation metrics, and visual evaluation by medical experts. Despeckle noise reduction through the application of these filters will improve the visual observation quality, or it may be used as a pre-processing step for further automated analysis, such as image and video segmentation and texture characterization in ultrasound cardiovascular imaging, as well as for bandwidth reduction in ultrasound video transmission for telemedicine applications. These topics are covered in detail in the companion book to this one. Furthermore, to facilitate further applications, we have developed in MATLAB™ two different toolboxes that integrate image (IDF) and video (VDF) despeckle filtering, texture analysis, and image and video quality evaluation metrics. The code for these toolboxes is open source and is available to download as a companion to the two books.
Table of Contents: Preface / Acknowledgments / List of Symbols / List of Abbreviations / Introduction to Speckle Noise in Ultrasound Imaging and Video / Basics of Evaluation Methodology / Linear Despeckle Filtering / Nonlinear Despeckle Filtering / Diffusion Despeckle Filtering / Wavelet Despeckle Filtering / Evaluation of Despeckle Filtering / Summary and Future Directions / References / Authors' Biographies
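As a sketch of the linear despeckle filtering family listed in the contents above, the following fragment implements a standard first-order local-statistics (Lee-type) filter. It is an illustrative textbook formulation, not the book's exact algorithmic steps, and is written in Python rather than the MATLAB™ of the IDF/VDF toolboxes:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_filter(img, win=5, noise_var=None):
    """First-order local-statistics (Lee-type) despeckle filter sketch.
    Output = local mean + weight * (pixel - local mean), where the weight
    shrinks toward 0 in homogeneous (noise-dominated) regions and toward 1
    near edges (signal-dominated regions)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))  # shape (H, W, win, win)
    local_mean = windows.mean(axis=(-2, -1))
    local_var = windows.var(axis=(-2, -1))
    if noise_var is None:
        noise_var = np.mean(local_var)  # crude global noise estimate
    weight = np.clip(1.0 - noise_var / np.maximum(local_var, 1e-12), 0.0, 1.0)
    return local_mean + weight * (img - local_mean)
```

In a homogeneous speckled region the weight collapses toward zero, so the filter output approaches the local mean and the pixel variance drops substantially.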




Sensor Analysis for the Internet of Things


Book Description

While it may be attractive to view sensors as simple transducers that convert physical quantities into electrical signals, the truth of the matter is more complex. The engineer should have a proper understanding of the physics involved in the conversion process, including interactions with other measurable quantities. A deep understanding of these interactions can be leveraged to apply sensor fusion techniques to minimize noise and/or extract additional information from sensor signals. Advances in microcontroller and MEMS manufacturing, along with improved internet connectivity, have enabled cost-effective wearable and Internet of Things sensor applications. At the same time, machine learning techniques have gone mainstream, so those same applications can now be more intelligent than ever before. This book explores these topics in the context of a small set of sensor types. We provide a basic understanding of sensor operation for accelerometers, magnetometers, gyroscopes, and pressure sensors. We show how information from these sensors can be fused to provide estimates of orientation, and we then explore the topics of machine learning and sensor data analytics.
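To illustrate the fusion idea in one dimension — combining a gyroscope integral (smooth but drift-prone) with an accelerometer tilt angle (noisy but drift-free, derived from the gravity vector) — here is a minimal complementary-filter sketch. This is a common baseline rather than the book's specific method; the function name and fixed blend factor are illustrative:

```python
import math

def complementary_tilt(accel, gyro_rate, dt, prev_angle, alpha=0.98):
    """One update of a complementary filter for roll angle (radians).
    accel: (ax, ay, az) in g; gyro_rate: roll rate in rad/s."""
    accel_angle = math.atan2(accel[1], accel[2])  # drift-free angle from gravity
    gyro_angle = prev_angle + gyro_rate * dt      # smooth short-term integration
    # High-pass the gyro path, low-pass the accelerometer path.
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```

Because the accelerometer term keeps a small constant weight, any initial error or gyro drift decays geometrically toward the gravity-derived angle while short-term motion still comes from the gyro.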




Secure Sensor Cloud


Book Description

The sensor cloud is a new computing paradigm for Wireless Sensor Networks (WSNs), which facilitates resource sharing and provides a platform to integrate different sensor networks where multiple users can build their own sensing applications at the same time. It enables a multi-user, on-demand sensory system in which computing, sensing, and wireless network resources are shared among applications. It therefore poses inherent challenges for providing security and privacy across the sensor cloud infrastructure. With the integration of WSNs under different ownerships, and users running a variety of applications including their own code, there is a need for a risk assessment mechanism to estimate the likelihood and impact of attacks on the life of the network. The data generated by the wireless sensors in a sensor cloud need to be protected against adversaries, who may be outsiders as well as insiders. Similarly, the code disseminated to the sensors within the sensor cloud needs to be protected against inside and outside adversaries. Moreover, since the wireless sensors cannot support complex and energy-intensive measures, lightweight schemes for integrity, security, and privacy of the data have to be redesigned. The book starts with the motivation for and architecture of a sensor cloud. Because multiple WSNs running user-owned applications and code are integrated, attacks become more likely. Thus, we next discuss a risk assessment mechanism to estimate the likelihood and impact of attacks on these WSNs in a sensor cloud, using a framework that allows the security administrator to better understand the threats present and take the necessary actions. Then, we discuss integrity- and privacy-preserving data aggregation in a sensor cloud, as it becomes harder to protect data in this environment. The integrity of the data can be compromised, as it becomes easier for an attacker to inject false data in a sensor cloud, and due to the hop-by-hop nature of aggregation, data privacy could be leaked as well. Next, the book discusses a fine-grained access control scheme that works on the securely aggregated data in a sensor cloud, using Attribute-Based Encryption (ABE) to achieve this objective. Furthermore, to securely and efficiently disseminate application code in a sensor cloud, we present a secure code dissemination algorithm that first reduces the amount of code to be transmitted from the base station to the sensor nodes. It then uses Symmetric Proxy Re-encryption along with Bloom filters and Hash-based Message Authentication Codes (HMACs) to protect the code against eavesdropping and false code injection attacks.
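As a rough illustration of two primitives named above, the following sketch shows a toy Bloom filter for compact membership tests over code pages, plus an HMAC tag a node can use to detect false code injection. It is illustrative only (class and function names are ours) and omits the Symmetric Proxy Re-encryption layer and the dissemination protocol itself:

```python
import hashlib
import hmac

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.
    Membership tests can yield false positives but never false negatives."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indexes(self, item):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

def tag_code_page(key, page):
    """HMAC-SHA256 tag so a receiving node can verify a code page's origin."""
    return hmac.new(key, page, hashlib.sha256).digest()
```

A node holding the shared key recomputes the tag over each received page and rejects any page whose tag fails `hmac.compare_digest`, defeating simple false-code injection by an outsider.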




Cognitive Fusion for Target Tracking


Book Description

The adaptive configuration of nodes in a sensor network has the potential to improve sequential estimation performance by intelligently allocating limited sensor network resources. In addition, the use of heterogeneous sensing nodes provides a diversity of information that further enhances estimation performance. This work reviews cognitive systems and presents a cognitive fusion framework for sequential state estimation using adaptive configuration of heterogeneous sensing nodes and heterogeneous data fusion. It also applies cognitive fusion to the sequential estimation problem of target tracking using foveal and radar sensors.
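The cognitive fusion framework is the book's own contribution; as common background, sequential state estimation for target tracking is typically built on a Kalman filter, into which fused measurements from heterogeneous sensors are fed. A minimal one-dimensional constant-velocity sketch (all parameter values illustrative):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.25):
    """One predict/update cycle of a constant-velocity Kalman filter.
    x: state [position, velocity]; P: 2x2 covariance; z: position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # process noise (white acceleration)
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding a ramp of position measurements drives the state toward the true position and unit velocity; a cognitive tracker would additionally reconfigure which sensors produce those measurements at each step.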




A Survey of Blur Detection and Sharpness Assessment Methods


Book Description

Blurring is an almost omnipresent effect in natural images. The main causes of blurring in images include: (a) the existence of objects at different depths within the scene, known as defocus blur; (b) motion, either of objects in the scene or of the imaging device; and (c) atmospheric turbulence. Automatic estimation of spatially varying sharpness/blurriness has several applications, including depth estimation, image quality assessment, information retrieval, and image restoration, among others. In some cases blur is intentionally introduced or enhanced; for example, in artistic photography and cinematography blur is introduced to emphasize a certain image region. Bokeh is a technique that introduces defocus blur for aesthetic purposes. Additionally, in trending applications like augmented and virtual reality, blur is usually introduced to provide or enhance depth perception. Digital images and videos are produced every day in astonishing amounts, and the demand for higher quality is constantly rising, which creates a need for advanced image quality assessment. Image quality assessment is also important for the performance of image processing algorithms: it has been determined that image noise and artifacts can affect the performance of algorithms such as face detection and recognition, image saliency detection, and video target tracking. Therefore, image quality assessment (IQA) has been a topic of intense research in the fields of image processing and computer vision. Since humans are the end consumers of multimedia signals, subjective quality metrics provide the most reliable results; however, their cost and time requirements make them infeasible for practical applications. Thus, objective quality metrics are usually preferred.
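As a concrete instance of the no-reference sharpness assessment methods this survey covers, one widely used baseline scores an image by the variance of its Laplacian response: blur attenuates high frequencies, so edge responses weaken and the variance drops. A sketch with illustrative helper names:

```python
import numpy as np

def laplacian_variance(img):
    """No-reference sharpness score: variance of a 4-neighbor Laplacian.
    Blurrier images have weaker edge responses, hence lower variance."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def box_blur(img, k=5):
    """Separable box blur, used here only to degrade sharpness."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, out)
```

Computed over local windows instead of the whole frame, the same score yields the spatially varying sharpness maps used for depth estimation and blur segmentation.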