A Survey of Blur Detection and Sharpness Assessment Methods


Book Description

Blurring is an almost omnipresent effect in natural images. Its main causes include: (a) defocus blur, arising from objects lying at different depths within the scene; (b) motion blur, caused by movement of either objects in the scene or the imaging device; and (c) blur due to atmospheric turbulence. Automatic estimation of spatially varying sharpness/blurriness has several applications, including depth estimation, image quality assessment, information retrieval, and image restoration. In some cases blur is intentionally introduced or enhanced; in artistic photography and cinematography, for example, it is used to emphasize a particular image region, and bokeh deliberately introduces defocus blur for aesthetic purposes. Likewise, in emerging applications such as augmented and virtual reality, blur is often introduced to provide or enhance depth perception.

Digital images and videos are produced every day in astonishing quantities, and the demand for higher quality is constantly rising, which creates a need for advanced image quality assessment. Image quality assessment also matters for the performance of image processing algorithms: noise and artifacts have been shown to degrade algorithms such as face detection and recognition, image saliency detection, and video target tracking. Image quality assessment (IQA) has therefore been a topic of intense research in image processing and computer vision. Since humans are the end consumers of multimedia signals, subjective quality metrics provide the most reliable results; however, their cost and time requirements make them infeasible for practical applications, so objective quality metrics are usually preferred.
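As an illustrative aside (not taken from the surveyed methods), a common baseline for the spatially varying sharpness estimation mentioned above is the local variance of the Laplacian: in-focus regions produce stronger second-derivative responses than blurred ones. The sketch below assumes OpenCV and NumPy are available; the function name and block size are illustrative choices, not part of the survey.

```python
import cv2
import numpy as np

def local_sharpness_map(gray: np.ndarray, win: int = 16) -> np.ndarray:
    """Illustrative per-block sharpness map: variance of the Laplacian
    computed over non-overlapping win x win blocks of a grayscale image."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    h, w = lap.shape
    hb, wb = h // win, w // win
    sharp = np.empty((hb, wb), dtype=np.float64)
    for i in range(hb):
        for j in range(wb):
            block = lap[i * win:(i + 1) * win, j * win:(j + 1) * win]
            sharp[i, j] = block.var()  # higher variance -> sharper block
    return sharp

# Example usage (file name is a placeholder):
# gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
# print(local_sharpness_map(gray))
```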




Proceedings of the 3rd International Symposium on Big Data and Cloud Computing Challenges (ISBCC – 16’)


Book Description

This proceedings volume contains selected papers presented at the 3rd International Symposium on Big Data and Cloud Computing Challenges, held at VIT University, India, on March 10 and 11, 2016. New research issues, challenges, and opportunities shaping the future agenda in the field of Big Data and Cloud Computing are identified and presented throughout the book, which is intended for researchers, scholars, students, software developers, and practitioners working at the forefront of their field. The book acts as a platform for exchanging ideas, posing questions for discussion, and sharing experience in the Big Data and Cloud Computing domain.




Engineering Mathematics and Artificial Intelligence


Book Description

- Explains the theory behind Machine Learning and highlights how mathematics can be used in Artificial Intelligence
- Illustrates how to improve existing algorithms by using advanced mathematics and discusses how Machine Learning can support mathematical modeling
- Captures how to simulate data by means of artificial neural networks and offers cutting-edge Artificial Intelligence technologies
- Emphasizes the classification of algorithms, optimization methods, and statistical techniques
- Explores future integration between Machine Learning and complex mathematical techniques




Multimedia Analysis, Processing and Communications


Book Description

This book brings together 24 groups of experts and active researchers from around the world in image processing and analysis, video processing and analysis, and communications-related processing, to present their newest research results, exchange their latest experiences and insights, and explore future directions in these important and rapidly evolving areas. It aims to increase the synergy between academic and industry professionals working in these fields. It focuses on state-of-the-art research in essential areas related to emerging technologies, standards, and applications for the analysis, processing, computing, and communication of multimedia information. The target audience is researchers, engineers, and graduate students working in disciplines linked to multimedia analysis, processing, and communications, e.g., computer vision, pattern recognition, information technology, image processing, and artificial intelligence. The book is also meant for a broader audience, including practicing professionals working in image/video applications such as image processing, video surveillance, and multimedia indexing and retrieval. We hope that the researchers, engineers, students, and other professionals who read this book find it informative, useful, and inspirational for their own work.




Modern Image Quality Assessment


Book Description

This lecture book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st century have witnessed a tremendous growth in the use of digital images as a means for representing and communicating information. A considerable portion of the image processing literature is devoted to methods for improving the appearance of images, or for maintaining the appearance of images that are processed. Nevertheless, the quality of digital images, processed or otherwise, is rarely perfect. Images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance the quality of images, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The goals of this book are as follows: (a) to introduce the fundamentals of image quality assessment and explain the relevant engineering problems; (b) to give a broad treatment of the current state of the art in image quality assessment, by describing leading algorithms that address these engineering problems; and (c) to provide new directions for future research, by introducing recent models and paradigms that differ significantly from those used in the past. The book is written to be accessible to university students curious about the state of the art in image quality assessment, industrial R&D engineers seeking to implement image/video quality assessment systems for specific applications, and academic theorists interested in developing new algorithms for image quality assessment or in using existing algorithms to design or optimize other image processing applications.
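As an illustrative aside (not drawn from the book), the simplest family of objective, full-reference quality metrics compares a distorted image against its pristine reference pixel by pixel; peak signal-to-noise ratio (PSNR) is the classic example, even though it correlates only loosely with perceived quality. The sketch below is a minimal NumPy implementation; the function name and the 8-bit peak value are assumptions made for illustration.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference image and a distorted
    copy of the same shape (8-bit data assumed, hence peak=255)."""
    ref = reference.astype(np.float64)
    dst = distorted.astype(np.float64)
    mse = np.mean((ref - dst) ** 2)          # mean squared error
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)  # PSNR in decibels

# Example: a synthetic reference image versus a noisy copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```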







JPEG2000 Image Compression Fundamentals, Standards and Practice


Book Description

This is nothing less than a totally essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. The work is of particular value to those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications.
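As an illustrative note (not part of the book), JPEG2000 encoding of the kind the standard defines is exposed by common imaging libraries. The sketch below uses Pillow and assumes it was built with OpenJPEG support; the file names, compression ratios, and layer choices are arbitrary placeholders.

```python
from PIL import Image  # Pillow; JPEG 2000 support requires the OpenJPEG codec

# Open an image (path is a placeholder) and save it as JPEG 2000.
img = Image.open("photo.png")

# Lossy encoding: with quality_mode="rates", quality_layers gives target
# compression ratios per quality layer (here 40:1, 20:1, 10:1).
img.save("photo_lossy.jp2", format="JPEG2000",
         quality_mode="rates", quality_layers=[40, 20, 10], irreversible=True)

# Lossless encoding: irreversible=False selects the reversible wavelet transform.
img.save("photo_lossless.jp2", format="JPEG2000", irreversible=False)
```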




Diagnostic Radiology Physics


Book Description

This publication is aimed at students and teachers involved in programmes that train medical physicists for work in diagnostic radiology. It provides, in the form of a syllabus, a comprehensive overview of the basic medical physics knowledge required for the practice of modern diagnostic radiology. This makes it particularly useful for graduate students and residents in medical physics programmes. The material presented in the publication has been endorsed by the major international organizations and is the foundation for academic and clinical courses in both diagnostic radiology physics and in emerging areas such as imaging in radiotherapy.




The Handbook of Medical Image Perception and Techniques


Book Description

A state-of-the-art review of key topics in medical image perception science and practice, including associated techniques, illustrations, and examples. This second edition contains extensive updates and substantial new content. Written by key figures in the field, it covers a wide range of topics including signal detection, image interpretation, and advanced image analysis (e.g. deep learning) techniques for interpretive and computational perception. It provides an overview of the key techniques of medical image perception and observer performance research, and includes examples and applications across clinical disciplines including radiology, pathology, and oncology. A final chapter discusses the future prospects of medical image perception and assesses upcoming challenges and possibilities, enabling readers to identify new areas for research. Written for both newcomers to the field and experienced researchers and clinicians, this book provides a comprehensive reference for those interested in medical image perception as a means to advance knowledge and improve human health.