Rate-Quality Optimized Video Coding


Book Description

Rate-Quality Optimized Video Coding discusses the matter of optimizing (or negotiating) the data rate of compressed digital video against its quality, a topic that has been relatively neglected on both sides of image/video coding and tele-traffic management. Video rate management is a technically challenging task, since a certain video quality must be maintained regardless of the availability of transmission or storage media. This is caused by the broadband nature of digital video and the inherent algorithmic features of mainstream video compression schemes, e.g. H.261, H.263 and the MPEG series. In order to maximize media utilization and to enhance video quality, the data rate of compressed video should be regulated within a budget of available media resources while keeping the video quality as high as possible. Part I (Chapters 1 to 4) discusses the non-stationarity of digital video. Since this non-stationary nature is also inherited from the algorithmic properties of the international video coding standards, which combine statistical coding techniques, the video rate management techniques of these standards are explored. Although there is a series of known video rate control techniques, such as picture rate variation, frame dropping, etc., these techniques do not view the matter as an optimization between rate and quality. From the viewpoint of rate-quality optimization, the quantizer is the sole means of controlling rate and quality. Thus, quantizers and quantizer control techniques are analyzed on the basis of the relationship between rate and quality. In Part II (Chapters 5 and 6), as a coherent approach to non-stationary video, established but still thriving nonlinear techniques, such as artificial neural networks (including radial basis function networks) and fuzzy logic-based schemes, are applied to video rate-quality optimization. Conventional linear techniques are also described before the nonlinear techniques are explored.
Using these nonlinear techniques, it is shown how they influence and tackle the rate-quality optimization problem. Finally, Chapter 7 reviews rate-quality optimization issues in emerging video communication applications such as video transcoding and mobile video, and discusses new issues and prospects for rate and quality control in those technology areas. Rate-Quality Optimized Video Coding is an excellent reference and can be used for advanced courses on the topic.
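The description identifies the quantizer as the sole means of trading rate against quality. A minimal sketch of that idea is a buffer-feedback rate controller that coarsens or refines the quantizer step as coded bits accumulate; everything here (the toy rate model, the function names, the thresholds) is a hypothetical simplification for illustration, not the book's scheme.

```python
def encode_frame_bits(complexity, q):
    """Toy rate model: coded bits fall as the quantizer step q grows."""
    return int(complexity / q)

def rate_control(complexities, target_bits_per_frame, q_min=1, q_max=31):
    """Encode a sequence of frame complexities under a per-frame bit budget."""
    q = 10                       # initial quantizer step
    buffer_fullness = 0          # bits waiting in the transmission buffer
    history = []
    for c in complexities:
        bits = encode_frame_bits(c, q)
        buffer_fullness += bits - target_bits_per_frame
        # Feedback: quantize more coarsely when the buffer fills,
        # more finely when it drains, clipped to the legal range.
        if buffer_fullness > target_bits_per_frame:
            q = min(q_max, q + 1)
        elif buffer_fullness < -target_bits_per_frame:
            q = max(q_min, q - 1)
        history.append((bits, q))
    return history
```

Feeding a run of high-complexity frames through `rate_control` shows the quantizer step climbing until the produced bits settle near the budget, which is the basic rate-quality negotiation the book formalizes.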




Complexity Optimized Video Codecs


Book Description

Abstract: We are facing increasing bandwidth in mobile systems, and this opens up new applications in a mobile terminal. It will be possible to download, record, send and receive images and video sequences. Even with more bandwidth, image and video data must be compressed before it can be sent, because of the amount of information it contains. MPEG-4 and H.263 are standards for the compression of video data. The problem is that the encoding and decoding algorithms are computationally intensive, and complexity increases with the size of the video. In mobile applications, processing capabilities such as memory space and calculation time are limited, and optimized algorithms for decoding and encoding are necessary. The question is whether it is possible to encode raw video data with low complexity. Single frames, e.g. from a digital camera, can then be coded and transmitted as a video sequence. On the other hand, the decoder needs to be able to handle sequences with different resolutions. Thus, decoders in new mobile terminals must decode higher-resolution sequences with the same complexity as low-resolution video requires. The work will involve literature studies of MPEG-4 and H.263. The goal is to investigate the possibility of encoding video data with low complexity and to find a way to optimize the downscaling of larger sequences in a decoder. The work should include: literature studies of MPEG-4 and H.263; a theoretical study of how CIF sequences (352x288 pixels) can be downscaled to QCIF (176x144 pixels) size; identification of optimized algorithms for a low-complexity encoder; implementation of such an encoder on a microprocessor, e.g. a DSP; and complexity analysis of processing consumption. Prerequisite experience is fair C programming and signal processing skills; basic knowledge of H.263 and MPEG-4 is useful. New mobile communication standards provide increased bandwidth, which opens up many new media applications and services in future mobile phones.
Video recording using the MMS standard, video conferencing and downloading of movies from the Internet are some of those applications. Even if the data rate is high, video data needs to be compressed using international video compression standards such as MPEG-4 or H.263. Efficient video compression algorithms are the focus of this thesis. The very limited computational capabilities of the terminals require a low-complexity encoder and decoder. A low complexity encoder for usage with [...]
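The downscaling task the thesis studies, reducing a CIF frame (352x288) to QCIF (176x144), can be sketched as averaging each 2x2 block of luma samples. A real low-complexity decoder would more likely do this in the transform domain or with dedicated integer arithmetic; this plain pixel-domain version is only an illustration, and the function name is ours.

```python
def downscale_by_2(frame):
    """frame: list of rows of integer pixel values; returns a half-size frame
    where each output sample is the rounded average of a 2x2 input block."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            s = frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]
            row.append((s + 2) >> 2)   # rounded average, integer-only (DSP-friendly)
        out.append(row)
    return out

# A synthetic CIF luma plane: 288 rows of 352 samples.
cif = [[(x + y) % 256 for x in range(352)] for y in range(288)]
qcif = downscale_by_2(cif)
assert len(qcif) == 144 and len(qcif[0]) == 176   # QCIF dimensions
```

The shift-based rounding avoids division entirely, which matters on the kind of DSP target the thesis mentions.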




Versatile Video Coding


Book Description

Video is the main driver of bandwidth use, accounting for over 80 per cent of consumer Internet traffic. Video compression is a critical component of many of the available multimedia applications, as it is necessary for the storage or transmission of digital video over today's band-limited networks. The majority of this video is coded using international standards developed in collaboration between the ITU-T Video Coding Experts Group (VCEG) and MPEG. The MPEG family of video coding standards began in the early 1990s with MPEG-1, developed for video and audio storage on CD-ROMs, with support for progressive video. MPEG-2 was standardized in 1995 for applications of video on DVD and standard- and high-definition television, with support for interlaced and progressive video. MPEG-4 Part 2, also known as MPEG-4 Visual, was standardized in 1999 for low-bit-rate multimedia applications on mobile platforms and the Internet, with support for object-based or content-based coding by modeling the scene as background and foreground. Since MPEG-1, the main video coding standards have been based on so-called macroblocks. However, research groups continued the work beyond the traditional video coding architectures and found that macroblocks could limit compression performance for high-resolution video. Therefore, in 2013 High Efficiency Video Coding (HEVC), also known as H.265, was released, with a structure similar to H.264/AVC but using coding units with more flexible partitions than the traditional macroblocks. HEVC has greater flexibility in prediction modes and transform block sizes, as well as more sophisticated interpolation and deblocking filters. In 2006, VC-1 was released: a video codec developed by Microsoft, implemented in Windows Media Video (WMV) 9 and standardized by the Society of Motion Picture and Television Engineers (SMPTE).
In 2017, the Joint Video Experts Team (JVET) released a call for proposals for a new video coding standard, initially called Beyond HEVC or Future Video Coding (FVC) and now known as Versatile Video Coding (VVC). VVC is being built on top of HEVC for application to Standard Dynamic Range (SDR), High Dynamic Range (HDR) and 360° video. VVC is planned to be finalized by 2020. This book presents the new VVC and provides updates on HEVC. The book discusses advances in lossless coding and covers the topic of screen content coding. Technical topics discussed include: beyond High Efficiency Video Coding; the High Efficiency Video Coding encoder; screen content; lossless and visually lossless coding algorithms; fast coding algorithms; visual quality assessment; other screen content coding algorithms; and an overview of the JPEG series.




Rate Distortion Optimization for Interprediction in H.264/AVC Video Coding


Book Description

Part 10 of MPEG-4 describes the Advanced Video Coding (AVC) method widely known as H.264. H.264 is the product of a collaborative effort known as the Joint Video Team (JVT). The final draft of the standard was completed in May 2003, and since then H.264 has become one of the most commonly used formats for compression [1]. H.264, unlike previous standards, describes a myriad of coding options that involve variable-block-size inter prediction methods, nine different intra prediction modes, multi-frame prediction and B-frame prediction. There is a huge number of coding options, each of which tends to generate a different number of coded bits and a different reconstruction quality. A video encoder is challenged to minimize the coded bitrate and maximize quality. However, choosing the coding mode of a macroblock to achieve this is a difficult problem due to the large number of coding combinations and parameters. Rate-distortion optimization is an effective technique for choosing the 'best' coding mode for a macroblock. This thesis presents two features of an H.264 encoder, multi-frame prediction and B-frame prediction. Additionally, a rate-distortion optimization scheme is implemented with these features to improve the overall performance of the encoder.
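The mode-decision problem the description outlines is commonly solved with a Lagrangian cost: each candidate mode yields a (distortion, rate) pair, and the encoder picks the mode minimizing J = D + lambda * R. The sketch below is not the thesis implementation; the candidate modes and their numbers are made up for the example.

```python
def rdo_select(candidates, lam):
    """candidates: dict mapping mode name -> (distortion_ssd, rate_bits).
    Returns the mode with the smallest Lagrangian cost J = D + lam * R."""
    best_mode, best_cost = None, float("inf")
    for mode, (d, r) in candidates.items():
        j = d + lam * r          # Lagrangian rate-distortion cost
        if j < best_cost:
            best_mode, best_cost = mode, j
    return best_mode

# Hypothetical measurements for one macroblock: finer partitions cost
# more bits but reduce distortion.
modes = {
    "intra_16x16": (1200.0, 90),
    "inter_16x16": (900.0, 140),
    "inter_8x8":   (700.0, 260),
}
```

A small lambda favors the low-distortion `inter_8x8` partitioning; a large lambda (i.e. a tight rate budget) pushes the decision toward the cheap `intra_16x16` mode, which is exactly the rate-quality trade-off the thesis optimizes.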




Recent Advances on Video Coding


Book Description

This book is intended to attract the attention of practitioners and researchers from industry and academia interested in challenging paradigms of multimedia video coding, with an emphasis on recent technical developments, cross-disciplinary tools and implementations. Given its instructional purpose, the book also overviews recently published video coding standards, such as H.264/AVC and SVC, from a simulation standpoint. Novel rate control schemes and cross-disciplinary tools for the optimization of diverse aspects of video coding are also addressed in detail, along with implementation architectures specially tailored for video processing and encoding. The book concludes by presenting new advances in semantic video coding. In summary, this book serves as a technically sound starting point for early-stage researchers and developers who wish to join leading-edge research on video coding, processing and multimedia transmission.




Emerging Research in Electronics, Computer Science and Technology


Book Description

PES College of Engineering is organizing an International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT-12) in Mandya, merging the event with the Golden Jubilee of the Institute. The Proceedings of the Conference present high-quality, peer-reviewed articles from the fields of electronics, computer science and technology. The book is a compilation of research papers on cutting-edge technologies, targeted at the scientific community actively involved in research activities.




Advanced Video Coding Systems


Book Description

This book presents an overview of the state of the art in video coding technology. Specifically, it introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2; describes the key technologies used in the AVS2 standard, including prediction coding, transform coding, entropy coding, and loop-filters; examines efficient tools for scene video coding and surveillance video, and the details of a promising intelligent video coding system; discusses optimization technologies in video coding systems; provides a review of image, video, and 3D content quality assessment algorithms; surveys the hot research topics in video compression.




A Rate-distortion Optimized Multiple Description Video Codec for Error Resilient Transmission


Book Description

The demand for applications like the transmission and sharing of video is ever-increasing. Although network resources (bandwidth in particular), coverage, networking technologies, and the compression ratios of state-of-the-art video coders have improved, the unreliability of the transmission medium prevents us from gaining the most benefit from these applications. This thesis introduces a video coder that is resilient to network failures in transmission applications, using the framework of multiple description coding (MDC). Unlike traditional video coding, which compresses the video into a single bitstream, MDC compresses the video into more than one bitstream, each of which can be independently decoded. This not only averages out the effect of network errors over the bitstreams but also makes it possible to exploit the multipath nature of most network topologies. An end-to-end rate-distortion optimization is proposed for the codec to ensure that it exhibits improved compression performance and that the descriptions are equally efficient, improving the final video quality. An optimized strategy for packetizing the compressed bitstreams of the descriptions is also proposed, which guarantees that each packet is self-contained and efficient. The evaluation of the developed MD codec over simulated unreliable packet networks shows that improved resilience can be achieved with the proposed strategies, and that the end video quality is significantly improved as a result. This is further verified with subjective evaluation over a range of different types of video test sequences.
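The core MDC idea described above, several independently decodable bitstreams instead of one, can be illustrated with the simplest possible scheme: splitting a sequence into odd and even frames and concealing a lost description from the surviving one. This toy frame-level split is not the thesis's codec (which adds end-to-end rate-distortion optimization and packetization); every name here is hypothetical.

```python
def make_descriptions(frames):
    """Split a frame sequence into two independently decodable halves."""
    return frames[0::2], frames[1::2]

def decode(d0, d1):
    """Merge both descriptions; if one was lost (None), conceal the
    missing frames by repeating neighbors from the surviving description."""
    if d0 is None:
        d0 = d1                  # concealment: reuse surviving frames
    if d1 is None:
        d1 = d0
    out = []
    for a, b in zip(d0, d1):     # re-interleave even/odd frames
        out.extend([a, b])
    return out

frames = list(range(8))          # stand-ins for decoded pictures
d0, d1 = make_descriptions(frames)
assert decode(d0, d1) == frames                       # both arrive: exact
assert decode(d0, None) == [0, 0, 2, 2, 4, 4, 6, 6]   # one lost: concealed
```

Losing either description degrades quality gracefully instead of breaking the stream, which is the resilience property the thesis then optimizes in a rate-distortion sense.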