Rate-Distortion Based Video Compression


Book Description

One of the most intriguing problems in video processing is the removal of redundancy, that is, the compression of a video signal. A large number of applications depend on video compression, and data compression is the enabling technology behind the multimedia and digital television revolution. In motion compensated lossy video compression, the original video sequence is first split into three new sources of information: segmentation, motion, and residual error. These three information sources are then quantized, leading to a reduced rate for their representation but also to a distorted reconstructed video sequence. Once the decomposition of the original source into segmentation, motion, and residual error information has been decided, the key remaining problem is the allocation of the available bits among these three sources of information. In this monograph a theory is developed which provides a solution to this fundamental bit allocation problem. It can be applied to all quad-tree-based motion compensated video coders which use a first-order differential pulse code modulation (DPCM) scheme for the encoding of the displacement vector field (DVF) and a block-based transform scheme for the encoding of the displaced frame difference (DFD). An optimal motion estimator, which results in the smallest DFD energy for a given bit rate for the encoding of the DVF, also follows from this theory. Such a motion estimator is used to formulate a motion compensated interpolation scheme which incorporates a global smoothness constraint on the DVF.
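
Bit allocation problems of this kind are commonly attacked with Lagrangian optimization: for a multiplier lambda, each source independently selects the operating point that minimizes D + lambda * R, and lambda is adjusted by bisection until the rate budget is met. The Python sketch below illustrates only this general idea, not the monograph's quad-tree formulation; the operating points and the bisection bounds are hypothetical.

    # Minimal sketch of Lagrangian bit allocation across independent sources.
    # Each source offers a set of (rate, distortion) operating points; for a
    # fixed multiplier lam, each source independently picks the point that
    # minimizes D + lam * R. Bisecting on lam meets a total rate budget.

    def allocate(sources, budget, lo=0.0, hi=1e6, iters=60):
        """sources: list of lists of (rate, distortion) points (hypothetical).
        Assumes the minimum-rate choice already fits within the budget."""
        def pick(lam):
            choice = [min(pts, key=lambda p: p[1] + lam * p[0]) for pts in sources]
            return choice, sum(r for r, _ in choice)

        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            _, rate = pick(lam)
            if rate > budget:
                lo = lam          # over budget: penalize rate more
            else:
                hi = lam          # under budget: allow more rate
        return pick(hi)[0]        # hi ends on the feasible side

    # Hypothetical operating points for segmentation, motion (DVF), residual (DFD):
    segmentation = [(2.0, 9.0), (4.0, 5.0), (8.0, 2.5)]
    motion       = [(3.0, 8.0), (6.0, 4.0), (12.0, 1.5)]
    residual     = [(5.0, 20.0), (10.0, 9.0), (20.0, 3.0)]
    print(allocate([segmentation, motion, residual], budget=20.0))

The key property exploited here is that, at a fixed lambda, the joint problem decouples into one independent choice per source, which is what makes the three-way allocation tractable.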










Advances in Multimedia Information Processing - PCM 2005


Book Description

We are delighted to welcome readers to the proceedings of the 6th Pacific-Rim Conference on Multimedia (PCM). The first PCM was held in Sydney, Australia, in 2000. Since then, it has been hosted successfully by Beijing, China, in 2001, Hsinchu, Taiwan, in 2002, Singapore in 2003, and Tokyo, Japan, in 2004; this year it was held on Jeju, one of the most beautiful islands in Korea. We accepted 181 papers out of 570 submissions, including regular and special-session papers. The acceptance rate of 32% reflects our commitment to ensuring a very high-quality conference, which would not be possible without the full support of the excellent Technical Committee and the anonymous reviewers who provided timely and insightful reviews. We would therefore like to thank the Program Committee and all the reviewers. This year's program reflects the current interests of the PCM community. The accepted papers cover all aspects of multimedia, from both technical and artistic perspectives and on both theoretical and practical issues. The PCM 2005 program comprises tutorial sessions and plenary lectures as well as regular presentations in three tracks of oral sessions and a single-track poster session. We have also tried to expand the scope of PCM to artistic papers, which need not be strictly technical.




Improving the Rate-Distortion Performance in Distributed Video Coding


Book Description

Distributed video coding is a coding paradigm that allows video frames to be encoded at a complexity substantially lower than that of conventional video coding schemes. This feature makes it suitable for emerging applications such as wireless surveillance video and mobile camera phones. In distributed video coding, a subset of the frames in the video sequence, known as the key frames, is encoded using a conventional intra-frame encoder, such as H.264/AVC in the intra mode, and then transmitted to the decoder. The remaining frames, known as the Wyner-Ziv frames, are encoded according to the Wyner-Ziv principle using channel codes, such as LDPC codes.

In transform-domain distributed video coding, each Wyner-Ziv frame undergoes a 4x4 block DCT and the resulting DCT coefficients are grouped into DCT bands. The bitplanes corresponding to each DCT band are encoded by a channel encoder, for example an LDPCA encoder, one after another. The resulting error-correcting bits are retained in a buffer at the encoder and transmitted incrementally as requested by the decoder.

At the decoder, the key frames are decoded first. The decoded key frames are then used to generate a side information frame as an initial estimate of the corresponding Wyner-Ziv frame, usually by employing an interpolation method. The difference between a DCT band in the side information frame and the corresponding band in the Wyner-Ziv frame, referred to as the correlation noise, is often modeled by a Laplacian distribution. Soft-input information for each bit in a bitplane is obtained using this correlation-noise model and the corresponding DCT band of the side information frame. The channel decoder then uses this soft-input information, along with the error-correcting bits sent by the encoder, to decode the bitplanes of each DCT band in each Wyner-Ziv frame. Accurate estimation of the correlation-noise model parameter(s) and generation of high-quality side information are therefore required to obtain reliable soft-input information for the bitplanes at the decoder, which in turn leads to more efficient decoding. Consequently, fewer error-correcting bits need to be transmitted from the encoder to decode the bitplanes, leading to better compression efficiency and rate-distortion performance.

The correlation noise, however, is not stationary: its statistics vary within each Wyner-Ziv frame and within its DCT bands. It is therefore difficult to find an accurate model for the correlation noise and to estimate its parameters precisely at the decoder. Moreover, in existing schemes the correlation-noise parameters for each DCT band are estimated before the decoder starts to decode the bitplanes of that band, and they are kept unchanged throughout the decoding process. Another concern is that, since the side information frame is generated at the decoder by temporal interpolation between previously decoded frames, its quality is generally poor when the motion between frames is non-linear. Generating high-quality side information is thus a challenging problem. This thesis is concerned with accurate estimation of the correlation-noise model parameters and with improving the quality of the side information, from the standpoint of improving the rate-distortion performance of distributed video coding.
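
To make the soft-input step concrete, the Python sketch below computes log-likelihood ratios for the bits of a uniformly quantized DCT coefficient from a Laplacian correlation-noise model centered on the co-located side-information value. All parameter values are hypothetical, probability mass outside the quantizer range is ignored, and a real DVC decoder would additionally condition each bitplane on the previously decoded bitplanes; this sketch computes marginal LLRs only.

    import math

    def laplace_cdf(x, mu, alpha):
        # CDF of a Laplacian with location mu and rate parameter alpha
        if x < mu:
            return 0.5 * math.exp(alpha * (x - mu))
        return 1.0 - 0.5 * math.exp(-alpha * (x - mu))

    def bitplane_llrs(y, alpha, n_bits, step):
        """Marginal LLRs for the bits of a uniformly quantized coefficient.

        y: co-located side-information coefficient (Laplacian mean)
        alpha: correlation-noise parameter (assumed known or estimated)
        n_bits: bits per coefficient; step: quantizer bin width.
        Bins are indexed 0 .. 2**n_bits - 1 (illustrative unsigned layout).
        """
        n_bins = 1 << n_bits
        # probability mass of each quantization bin under the noise model
        mass = [laplace_cdf((i + 1) * step, y, alpha) - laplace_cdf(i * step, y, alpha)
                for i in range(n_bins)]
        llrs = []
        for b in range(n_bits - 1, -1, -1):   # MSB first, as bitplanes are decoded
            p0 = sum(m for i, m in enumerate(mass) if not (i >> b) & 1)
            p1 = sum(m for i, m in enumerate(mass) if (i >> b) & 1)
            llrs.append(math.log((p0 + 1e-12) / (p1 + 1e-12)))
        return llrs

    print(bitplane_llrs(y=11.3, alpha=0.4, n_bits=4, step=2.0))

A larger alpha concentrates the Laplacian around the side-information value, producing stronger LLRs, which is why accurate parameter estimation directly reduces the number of error-correcting bits the decoder must request.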
A new scheme is proposed for estimating the correlation-noise parameters wherein the decoder decodes all the bitplanes of a DCT band in a Wyner-Ziv frame simultaneously and then refines the parameters of the band's correlation-noise model in an iterative manner. This process is carried out on an augmented factor graph using a new recursive message-passing algorithm, with the side information generated once and kept unchanged during the decoding of the Wyner-Ziv frame. Extensive simulations show that the proposed decoder improves the rate-distortion performance in comparison with the original DISCOVER codec and with another DVC codec employing side-information frame refinement, particularly for video sequences with high motion content. In the second part of this work, a new algorithm for generating the side information is proposed, which refines the initial side information frame using the additional information obtained after decoding the previous DCT bands of a Wyner-Ziv frame. Simulations demonstrate that the proposed algorithm outperforms schemes employing other side-information refinement mechanisms. Finally, it is shown that incorporating the proposed side-information refinement algorithm into the decoder proposed in the first part of the thesis leads to a further improvement in the rate-distortion performance of the DVC codec.
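
A minimal sketch of the underlying decode-and-refine loop follows, in Python. Here decode_fn is a hypothetical stand-in for the actual channel decoder (LDPCA decoding plus soft-input computation); the maximum-likelihood update alpha = 1 / mean(|x - y|) for the Laplacian parameter is standard, but the thesis's factor-graph message-passing formulation is considerably richer than this loop.

    def estimate_alpha(decoded_band, side_band):
        # ML estimate of the Laplacian parameter from current residuals:
        # alpha = 1 / mean(|x - y|). A floor avoids division by zero.
        mad = sum(abs(x - y) for x, y in zip(decoded_band, side_band)) / len(decoded_band)
        return 1.0 / max(mad, 1e-6)

    def refine_band(side_band, decode_fn, alpha0, n_iter=5, tol=1e-3):
        """Alternate decoding and noise-parameter re-estimation.

        decode_fn(side_band, alpha) is a placeholder for the channel
        decoder; it returns the current reconstruction of the band.
        """
        alpha = alpha0
        decoded = side_band
        for _ in range(n_iter):
            decoded = decode_fn(side_band, alpha)
            new_alpha = estimate_alpha(decoded, side_band)
            if abs(new_alpha - alpha) < tol:      # parameters have converged
                break
            alpha = new_alpha
        return decoded, alpha

    # Toy stand-in: pretend decoding pulls the estimate halfway toward the truth.
    truth = [4.0, -2.0, 7.0, 0.5]
    side  = [3.0, -1.0, 9.0, 1.5]
    stub  = lambda y, a: [yi + 0.5 * (ti - yi) for yi, ti in zip(y, truth)]
    print(refine_band(side, stub, alpha0=0.5))

The point of the alternation is that each decoding pass sharpens the residual statistics, which in turn sharpens the soft inputs for the next pass, so the parameters are no longer frozen before decoding begins as in the existing schemes described above.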




Video Coding with Adaptive Vector Quantization and Rate Distortion Optimization


Book Description

The objective of this dissertation is to investigate rate-distortion optimization and to evaluate the prospects of adaptive vector quantization for digital video compression. Rate-distortion optimization aims to improve compression performance using discrete optimization algorithms. We first describe and classify the algorithms that have been developed in the literature to date. One of these algorithms is extended to make it generally applicable, and the correctness of the new procedure is proven. Moreover, we compare the complexity of these algorithms, first in an implementation-independent manner and then through run-time experiments. Finally, we propose a technique to speed up one of the aforementioned algorithms.

Adaptive vector quantization enables adaptation to sources with unknown or non-stationary statistics. This feature is important for digital video, since the statistics of two consecutive frames are usually similar, but in the long run the general statistics of the frames may change even when scene changes are neglected. We examine combinations of adaptive vector quantization with various state-of-the-art video compression techniques. First, we present an adaptive vector quantization based codec that is able to encode and decode in real time on current PC technology. This codec is rate-distortion optimized, and adaptive vector quantization is applied in the wavelet transform domain. The organization of the wavelet coefficients is then made more efficient using adaptive partitioning techniques. Moreover, the main adaptability mechanism of adaptive vector quantization, the so-called codebook update, is studied. Finally, a combination of adaptive vector quantization and motion compensation is considered. We show that, at very low bitrates, adaptive vector quantization performs at least as well as, and often better than, discrete cosine transform coding on prediction residual frames.
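
As a concrete illustration of a codebook-update mechanism, the Python sketch below implements one common adaptive VQ variant, codebook replenishment with least-recently-used eviction: when no codeword matches an input vector well enough, the vector itself is transmitted and spliced into the codebook, so the decoder can mirror every update from the received stream. The threshold, dimensions, and data are hypothetical; the dissertation studies its own update rules.

    import random

    def nearest(codebook, v):
        # index and squared Euclidean distance of the best-matching codeword
        def d2(c):
            return sum((ci - vi) ** 2 for ci, vi in zip(c, v))
        i = min(range(len(codebook)), key=lambda j: d2(codebook[j]))
        return i, d2(codebook[i])

    def encode_adaptive(vectors, codebook, threshold=0.5):
        """Codebook-replenishment sketch: if no codeword matches an input
        well enough, transmit the vector itself and overwrite the least-
        recently-used entry. The decoder sees the same symbol stream, so it
        mirrors every update and the two codebooks stay synchronized."""
        lru = list(range(len(codebook)))      # least-recently-used order
        stream = []                           # (is_update, payload) symbols
        for v in vectors:
            i, dist = nearest(codebook, v)
            if dist > threshold:              # poor match: update codebook
                i = lru[0]                    # evict the stalest entry
                codebook[i] = list(v)
                stream.append((True, list(v)))   # costs rate, cuts distortion
            else:
                stream.append((False, i))
            lru.remove(i)
            lru.append(i)                     # mark entry i as freshly used
        return stream

    random.seed(1)
    cb = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
    data = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]
    syms = encode_adaptive(data, cb)
    print(sum(1 for u, _ in syms if u), "codebook updates out of", len(syms))

The threshold makes the rate-distortion trade-off explicit: raising it sends fewer raw vectors (lower rate, higher distortion), while lowering it tracks the source statistics more closely, which is the essence of the adaptability studied in the dissertation.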




Issues in Applied Computing: 2011 Edition


Book Description

Issues in Applied Computing: 2011 Edition is a ScholarlyEditions™ eBook that delivers timely, authoritative, and comprehensive information about Applied Computing. The editors have built Issues in Applied Computing: 2011 Edition on the vast information databases of ScholarlyNews™. You can expect the information about Applied Computing in this eBook to be deeper than what you can access anywhere else, as well as consistently reliable, authoritative, informed, and relevant. The content of Issues in Applied Computing: 2011 Edition has been produced by the world’s leading scientists, engineers, analysts, research institutions, and companies. All of the content is from peer-reviewed sources, and all of it is written, assembled, and edited by the editors at ScholarlyEditions™ and available exclusively from us. You now have a source you can cite with authority, confidence, and credibility. More information is available at http://www.ScholarlyEditions.com/.