Distributed Video Coding for Video Communication on Mobile Devices and Sensors

Peter Lambert, Stefaan Mys, Jozef Škorupa, Jürgen Slowack, Rik Van de Walle, Christos Grecos
DOI: 10.4018/978-1-61520-761-9.ch020

Abstract

In the context of digital video coding, recent insights have led to a new video coding paradigm called Distributed Video Coding (DVC), characterized by low-complexity encoding and high-complexity decoding, in contrast to traditional video coding schemes. This chapter provides a detailed overview of DVC by explaining the underlying principles and results from information theory, and it introduces a number of application scenarios. It also discusses the most important practical architectures currently available. One of these architectures is analyzed step by step to detail its functional building blocks, including an analysis of its coding performance compared to traditional coding schemes. In addition, it is demonstrated that the computational complexity in a video coding scheme can be shifted dynamically from the encoder to the decoder and vice versa by combining conventional and distributed video coding techniques. Lastly, this chapter discusses a number of active research topics that are expected to further enhance the performance of DVC, i.e., side information generation, virtual channel noise estimation, and new coding modes.

Introduction

In traditional video coding schemes, such as MPEG-2, H.264/AVC, or VC-1, it is the encoder that exploits the statistics of the source signal. As a result, encoding requires significantly more computational resources than decoding, which suits traditional application scenarios such as broadcasting or video-on-demand very well, since the video is compressed once and decoded many times.

However, emerging applications such as wireless low-power video surveillance, video conferencing with mobile devices, or video communications in sensor networks, require ultra low-complexity encoders, possibly at the expense of a more complex decoder.

Surprisingly, results from information theory established in the 1970s suggest that this should be possible without losing any coding efficiency. In the context of digital video coding, these insights have led to a new video coding paradigm called Distributed Video Coding (DVC), which is based on Distributed Source Coding (DSC), and characterized by low-complexity encoding and high-complexity decoding.

Distributed Source Coding

DSC is a coding paradigm based on two major results from information theory: the Slepian-Wolf theorem and the Wyner-Ziv theorem. Slepian and Wolf (1973) proved that two correlated random sequences generated by repeated independent drawings of a pair of discrete random variables X and Y can be coded as efficiently by two independent coders as by a joint encoder, provided that the resulting bit streams are jointly decoded (Figure 1). In particular, this result states that all rate pairs satisfying R_X + R_Y ≥ H(X,Y), R_X ≥ H(X|Y), and R_Y ≥ H(Y|X) are achievable. This means that the sum of the rates of the sources X and Y can indeed achieve the joint entropy, just as for joint encoding (Figure 2).
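As a numerical illustration of these bounds, the sketch below (a toy example with an invented joint distribution, not taken from the chapter) computes the relevant entropies for a pair of correlated binary sources and confirms that the corner point R_X = H(X|Y), R_Y = H(Y) of the achievable region sums to the joint entropy, whereas naive independent coding needs H(X) + H(Y) bits:

```python
import math

# Toy joint distribution p(x, y) for binary X and Y (illustrative values,
# not from the chapter): X and Y agree with probability 0.9.
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

def H(dist):
    """Shannon entropy in bits of a distribution given as a dict of probabilities."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# Marginal distributions of X and Y
px = {x: sum(q for (a, _), q in p.items() if a == x) for x in (0, 1)}
py = {y: sum(q for (_, b), q in p.items() if b == y) for y in (0, 1)}

H_xy = H(p)
H_x, H_y = H(px), H(py)
H_x_given_y = H_xy - H_y  # chain rule: H(X|Y) = H(X,Y) - H(Y)

# Slepian-Wolf corner point: (R_X, R_Y) = (H(X|Y), H(Y)) sums to H(X,Y),
# matching the joint-encoding bound despite separate encoding of X and Y.
print(f"H(X,Y)        = {H_xy:.3f} bits")
print(f"H(Y) + H(X|Y) = {H_y + H_x_given_y:.3f} bits")
print(f"H(X) + H(Y)   = {H_x + H_y:.3f} bits (naive independent coding)")
```

For this distribution the Slepian-Wolf corner point needs about 1.47 bits in total, while coding the two sources independently without joint decoding would need 2 bits.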

Figure 1.

Joint source coding (left) vs. distributed source coding (right)

Figure 2.

Achievable rate regions for the coding schemes from Figure 1


A special case of DSC is when a decoder makes use of so-called side information. Here, the source sequence X is correlated with some side information Y which is unavailable at the encoder, but available at the decoder (Figure 3). Since conventional encoding techniques can code Y at a rate R_Y = H(Y), the above results indicate that R_X = H(X|Y) is achievable. This case will be the starting point for DVC architectures, as discussed later in this chapter.
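A standard textbook toy example of coding with decoder side information (a hypothetical illustration, not taken from the chapter) uses cosets of the 3-bit repetition code: X is a uniform 3-bit word and Y is known to differ from X in at most one bit. The encoder then sends only a 2-bit coset index (syndrome) of X instead of all 3 bits, and the decoder resolves the coset using Y:

```python
from itertools import product

def coset_index(x):
    """2-bit syndrome identifying X's coset of the repetition code {000, 111}."""
    return (x[0] ^ x[2], x[1] ^ x[2])

def decode(syndrome, y):
    """Return the coset member closest in Hamming distance to the side information Y."""
    s0, s1 = syndrome
    # The two 3-bit words sharing this syndrome are bit-wise complements.
    candidates = [(s0, s1, 0), (s0 ^ 1, s1 ^ 1, 1)]
    return min(candidates, key=lambda c: sum(a != b for a, b in zip(c, y)))

# Exhaustive check: every X is recovered exactly from 2 bits plus side info,
# because the two coset members lie at Hamming distance 3 from each other,
# while Y is within distance 1 of X.
ok = all(decode(coset_index(x), y) == x
         for x in product((0, 1), repeat=3)
         for y in product((0, 1), repeat=3)
         if sum(a != b for a, b in zip(x, y)) <= 1)
print("All 3-bit sources recovered from 2-bit syndromes:", ok)
```

Note that the encoder never sees Y; it exploits only the *known correlation structure* (at most one bit flip), which is exactly the principle that DVC architectures build on.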

Figure 3.

Slepian-Wolf coding with side information at the decoder


The work of Slepian and Wolf, which involved lossless compression, was extended to lossy compression by Wyner and Ziv (1976). They considered compression with decoder side information. This time, however, a distortion D = E[d(X,X')] between the original signal X and the decoded signal X' is allowed. Let R^WZ_{X|Y}(D) be the achievable lower bound for the bit rate given a distortion D, and R_{X|Y}(D) the rate required in case the side information is available at the encoder as well.

Given these notations, Wyner and Ziv (1976) proved that a rate loss R^WZ_{X|Y}(D) − R_{X|Y}(D) ≥ 0 occurs when the encoder does not have access to the side information. More importantly, they also proved that equality holds in the case of Gaussian memoryless sources and a mean squared error distortion metric d. Later, these results were extended to more general cases, proving that equality also holds for source sequences X that are the sum of arbitrarily distributed side information Y and independent Gaussian noise N (Pradhan, 2003), and that the rate loss for sources with general statistics and a mean squared error distortion metric d is less than 0.5 bits per sample (Zamir, 1996).
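The no-rate-loss Gaussian case has a simple closed form. Assuming X = Y + N with Gaussian noise N of variance σ_N² and MSE distortion, the Wyner-Ziv rate coincides with the conditional rate R_{X|Y}(D) = max(0, ½·log₂(σ_N²/D)) bits per sample (a standard information-theoretic result; the specific numbers below are illustrative, not from the chapter):

```python
import math

def wyner_ziv_rate(sigma_n2, D):
    """Rate in bits/sample for X = Y + N, N Gaussian with variance sigma_n2, MSE distortion D."""
    if D >= sigma_n2:
        # The allowed distortion exceeds the noise variance: the decoder can
        # simply output the side information Y, so no rate is needed.
        return 0.0
    return 0.5 * math.log2(sigma_n2 / D)

sigma_n2 = 1.0
for D in (1.0, 0.5, 0.25, 0.1):
    print(f"D = {D:>4}: R = {wyner_ziv_rate(sigma_n2, D):.3f} bits/sample")
```

As expected, halving the target distortion costs an extra 0.5 bits per sample, and no rate at all is needed once D reaches the noise variance.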
