Efficient Predictive Algorithms for Image Compression


Book Description

This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia content, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for the compression of Light Field images based on HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
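As a rough illustration of the Locally Linear Embedding-style prediction mentioned above, the minimal Python/NumPy sketch below predicts a block as a weighted combination of similar patches, with the weights fitted by least squares on an L-shaped template of neighbouring pixels. This is only an illustrative sketch, not the algorithm from the book: the function name lle_block_prediction, its parameters (block size, template width, number of neighbours, search range) and the synthetic reference image are all assumptions made here for demonstration.

```python
import numpy as np

def lle_block_prediction(ref, top, left, bs=8, tw=2, k=8, search=24):
    """Predict a bs x bs block at (top, left) as an LLE-style weighted
    combination of similar patches found in the reference image."""

    def template(img, r, c):
        # L-shaped template: tw rows above the block (including the corner)
        # and tw columns to its left, flattened into one vector.
        above = img[r - tw:r, c - tw:c + bs]
        beside = img[r:r + bs, c - tw:c]
        return np.concatenate([above.ravel(), beside.ravel()])

    target_t = template(ref, top, left)

    # Candidate patches from a window up and to the left of the target.
    # (Illustrative only; a codec would search strictly causal, reconstructed samples.)
    cands = []
    for r in range(max(tw, top - search), top + 1):
        for c in range(max(tw, left - search), left + 1):
            if (r, c) != (top, left):
                cands.append((template(ref, r, c), ref[r:r + bs, c:c + bs]))

    # Keep the k candidates whose templates best match the target template.
    cands.sort(key=lambda tb: np.sum((tb[0] - target_t) ** 2))
    A = np.stack([t for t, _ in cands[:k]], axis=1)       # templates as columns
    blocks = np.stack([b for _, b in cands[:k]], axis=0)  # corresponding blocks

    # Least-squares weights reproducing the target template are reused
    # to combine the candidate blocks into the block prediction.
    w, *_ = np.linalg.lstsq(A, target_t, rcond=None)
    return np.tensordot(w, blocks, axes=1)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))           # stand-in for reconstructed image samples
pred = lle_block_prediction(ref, top=40, left=40)
print(pred.shape)                    # (8, 8)
```

In a real codec the weights would be derived only from already-reconstructed pixels, so the decoder can repeat the same search and fit without receiving the weights.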




Research Anthology on Telemedicine Efficacy, Adoption, and Impact on Healthcare Delivery


Book Description

Telemedicine, which involves electronic communications and software, provides the same clinical services to patients without requiring an in-person visit; essentially, it is remote healthcare. Though telemedicine is not a new practice, it has become an increasingly popular form of healthcare delivery due to current events, including the COVID-19 pandemic. Not only are visits being moved onto virtual platforms, but additional materials and correspondence can also remain in the digital sphere. Virtual lab results, digital imaging, medical diagnosis, and video consultations are just a few examples of how telemedicine can be used to increase accessibility in healthcare delivery. With telemedicine being used in both the diagnosis and treatment of patients, technology can be applied at almost any phase of the patient experience. As healthcare delivery follows the digital shift, it is important to understand the technologies, benefits and challenges, and overall impacts of the remote healthcare experience. The Research Anthology on Telemedicine Efficacy, Adoption, and Impact on Healthcare Delivery presents the latest research on best practices for adopting telehealth into medical practice, on its efficacy, and on solutions for improving telemedicine, and addresses emerging challenges and opportunities, including securing patient data and providing healthcare access to rural populations. Covering important themes that include doctor-patient relationships, tele-wound monitoring, and telemedicine regulations, this book is essential for healthcare professionals, doctors, medical students, academic and medical libraries, medical technologists, practitioners, stakeholders, researchers, academicians, and students interested in the emerging technological developments and solutions within the field of telemedicine.




Advanced Computer and Communication Engineering Technology


Book Description

This book covers diverse aspects of advanced computer and communication engineering, focusing specifically on industrial and manufacturing theory and applications of electronics, communications, computing and information technology. Experts in research, industry, and academia present the latest developments in technology, describe applications involving cutting-edge communication and computer systems, and explore likely future directions. In addition, readers are given access to numerous new algorithms that assist in solving computer and communication engineering problems. The book is based on presentations delivered at ICOCOE 2014, the 1st International Conference on Communication and Computer Engineering. It will appeal to a wide range of professionals in the field, including telecommunication engineers, computer engineers and scientists, researchers, academics and students.




Histopathological Image Analysis in Medical Decision Making


Book Description

Medical imaging technologies play a significant role in visualization and interpretation in medical diagnosis and practice, supporting decision making, pattern classification, diagnosis, and learning. Progress in the field of medical imaging leads to interdisciplinary discoveries in microscopic image processing and computer-assisted diagnosis systems, and aids physicians in the diagnosis and early detection of diseases. Histopathological Image Analysis in Medical Decision Making provides emerging research exploring the theoretical and practical applications of image technologies and feature extraction procedures within the medical field. Featuring coverage on a broad range of topics such as image classification, digital image analysis, and prediction methods, this book is ideally designed for medical professionals, system engineers, medical students, researchers, and medical practitioners seeking current research on problem-oriented processing techniques in imaging technologies.




Discrete Cosine Transform, Second Edition


Book Description

Many new DCT-like transforms have been proposed since the first edition of this book: for example, the integer DCT, which yields integer transform coefficients; the directional DCT, which exploits the dominant directions within an image; and the steerable DCT. The advent of higher-resolution formats such as UHDTV and 4K-TV demands both small and large transform blocks, so that small and large homogeneous areas can each be encoded efficiently. An updated book on the DCT, adapted to the present day, covering the new advances in this area and targeted at students, researchers and industry, is therefore a necessity.
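For readers encountering the transform for the first time, the short NumPy sketch below builds the orthonormal DCT-II basis matrix, applies the separable 2-D transform to an 8x8 block, and then mimics an integer DCT by scaling and rounding that basis. This is a minimal sketch under assumptions made here (the helper dct_matrix and the scaling factor of 64 are illustrative choices); it is not the integer transform standardized in HEVC, nor any specific transform from the book.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis: X = C @ x @ C.T gives the 2-D DCT of x."""
    k = np.arange(n)[:, None]          # frequency index (rows)
    i = np.arange(n)[None, :]          # sample index (columns)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)         # DC row uses a different normalization
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)

coeffs = C @ block @ C.T               # forward 2-D DCT
reconstructed = C.T @ coeffs @ C       # inverse (C is orthonormal)
print(np.allclose(block, reconstructed))   # True

# A crude integer-DCT flavour: scale and round the basis so the transform
# itself can be computed in integer arithmetic, then rescale at the end.
Ci = np.round(C * 64).astype(int)
int_coeffs = Ci @ block.astype(int) @ Ci.T
print(np.abs(int_coeffs / 4096.0 - coeffs).max())  # small approximation error
```

The same separable row-column structure applies for any block size, which is one reason small and large transform blocks can share a single implementation pattern.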




Advanced Informatics for Computing Research


Book Description

This two-volume set (CCIS 955 and CCIS 956) constitutes the refereed proceedings of the Second International Conference on Advanced Informatics for Computing Research, ICAICR 2018, held in Shimla, India, in July 2018. The 122 revised full papers presented were carefully reviewed and selected from 427 submissions. The papers are organized in topical sections on computing methodologies; hardware; information systems; networks; and security and privacy.




Artificial Intelligence


Book Description

This book constitutes the refereed proceedings of the Second International Conference on Artificial Intelligence, SLAAI-ICAI 2018, held in Moratuwa, Sri Lanka, in December 2018. The 32 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in the following topical sections: intelligent systems; neural networks; game theory; ontology engineering; natural language processing; agent-based systems; signal and image processing.




Content Computing


Book Description

Welcome to the Advanced Workshop on Content Computing 2004. The focus of this workshop was "Content Computing". It emphasized research areas that facilitate efficient, appropriate dissemination of content to users with the necessary access rights. We use the word "content" instead of "information" or "data" because we want to cover not only raw data but also presentation quality. The fast growth of the Internet has already made it the key infrastructure for information dissemination, education, business and entertainment. While the client-server model has been the most widely adopted paradigm for the WWW, the desire to provide more value-added services in the delivery layer has led to the concept of an active network, where content-driven, intelligent computation is performed to provide quality of service for content presentation and best fit client demand. These value-added services typically aim to enhance information security, provide pervasive Internet access, and improve application robustness, system/network performance, knowledge extraction, etc. They are realized by incorporating sophisticated mechanisms at the delivery layer, which is transparent to content providers and Web surfers. Consequently, the notion of "Content Computing" has emerged. Content computing is a new paradigm for coordinating distributed systems and intelligent networks, based on a peer-to-peer model and with value-added processing of the application-specific content at the delivery layer. This paradigm is especially useful for pervasive lightweight client devices such as mobile and portable end-user terminals with a wide variation of hardware/software configurations. This year, the workshop was held in Zhenjiang, Jiangsu, China. We received 194 high-quality papers from 11 regions, namely PR China, Korea, Singapore, Japan, the United States, Canada, Australia, Germany, Taiwan, Italy, and Hong Kong. In total, 62 papers were accepted and presented at the workshop.




Efficient Algorithms for MPEG Video Compression


Book Description

Video compression is the enabling technology behind many cutting-edge business and Internet applications, including video-conferencing, video-on-demand, and digital cable TV. Coauthored by internationally recognized authorities on the subject, this book takes a close look at the essential tools of video compression, exploring some of the most promising algorithms for converting raw data to a compressed form.
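One of those essential tools is motion-compensated prediction. As a point of reference for what efficient algorithms improve upon, the sketch below shows exhaustive block matching with the sum of absolute differences (SAD). It is a minimal baseline written for this description under stated assumptions; the function name, parameters and toy frames are not taken from the book.

```python
import numpy as np

def full_search_motion_vector(ref, cur, r0, c0, bs=16, search=8):
    """Exhaustive block matching: find the displacement into the reference
    frame that best matches the current block under the SAD criterion."""
    block = cur[r0:r0 + bs, c0:c0 + bs].astype(int)
    best, best_sad = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + bs > ref.shape[0] or c + bs > ref.shape[1]:
                continue                      # candidate falls outside the frame
            sad = np.abs(ref[r:r + bs, c:c + bs].astype(int) - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dr, dc)
    return best, best_sad

# Toy frames: the current frame is the reference shifted by (3, -2) pixels.
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))

mv, sad = full_search_motion_vector(ref, cur, r0=24, c0=24)
print(mv, sad)   # (-3, 2) with SAD 0 for this synthetic example
```

Exhaustive search is simple but computationally expensive, which is why much of the work on efficient video compression concentrates on faster ways of obtaining comparable motion vectors.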




Digital Image Compression Techniques


Book Description

In order to utilize digital images effectively, specific techniques are needed to reduce the number of bits required for their representation. This Tutorial Text provides the groundwork for understanding these image compression techniques and presents a number of different schemes that have proven useful. The algorithms discussed in this book are concerned mainly with the compression of still-frame, continuous-tone, monochrome and color images, but some of the techniques, such as arithmetic coding, have found widespread use in the compression of bilevel images. Both lossless (bit-preserving) and lossy techniques are considered. A detailed description of the compression algorithm proposed as the world standard (the JPEG baseline algorithm) is provided. The book contains approximately 30 pages of reconstructed and error images illustrating the effect of each compression technique on a consistent image set, thus allowing for a direct comparison of bit rates and reconstructed image quality. For each algorithm, issues such as quality vs. bit rate, implementation complexity, and susceptibility to channel errors are considered.
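To make the lossless (bit-preserving) idea concrete, the sketch below applies the simplest possible predictive scheme, prediction from the left neighbour, and compares the zeroth-order entropy of the raw pixels with that of the prediction residuals; the residual entropy roughly bounds what a Huffman or arithmetic coder could spend per pixel. The predictor, the synthetic ramp image and the helper names are assumptions made for this illustration, not algorithms taken from the book.

```python
import numpy as np

def entropy_bits_per_pixel(values):
    """Zeroth-order entropy in bits/pixel: a rough lower bound on the rate
    an entropy coder (Huffman, arithmetic) could reach for these symbols."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def left_neighbour_residuals(img):
    """Lossless predictive coding with the simplest predictor: each pixel is
    predicted by its left neighbour and only the residual is entropy-coded."""
    pred = np.empty_like(img)
    pred[:, 0] = 128                  # fixed prediction for the first column
    pred[:, 1:] = img[:, :-1]         # predict from the left neighbour
    return img.astype(int) - pred.astype(int)

# Synthetic smooth image: a horizontal ramp plus mild noise.
rng = np.random.default_rng(1)
img = np.clip(np.linspace(0, 255, 256)[None, :] + rng.normal(0, 4, (256, 256)),
              0, 255).astype(np.uint8)

print("raw pixels :", round(entropy_bits_per_pixel(img), 2), "bits/pixel")
print("residuals  :", round(entropy_bits_per_pixel(left_neighbour_residuals(img)), 2),
      "bits/pixel")
```

On smooth content the residuals cluster around zero, so their entropy falls well below the roughly 8 bits/pixel of the raw data; that gap is what a lossless coder exploits.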