GME of MPEG-4 on Multicore Processors

Mahmoud Alsarayreh, Hussein Alzoubi
Copyright © 2017 | Pages: 12
DOI: 10.4018/IJCVIP.2017100102

Abstract

Multicore processor systems lead today's microprocessor industry. This places growing pressure on programmers to write parallel programs that balance load wisely among the cores of a system and optimize performance. At the same time, video data is becoming increasingly important: video is the fuel for many contemporary Internet applications such as YouTube, and it is the most storage- and bandwidth-hungry type of data, especially in the context of newer video applications such as HDTV and IPTV. The benefits of video compression are especially evident in real-time applications. In light of this, it is important to bring into practice parallelized video codecs programmed to run on multicore systems. In this paper, the authors concentrate on one aspect of the MPEG-4 video codec: global motion estimation and compensation. They present a parallel implementation of MPEG-4 global motion estimation and compensation on multicore processors and provide a detailed performance evaluation under various scenarios.

1. Introduction

The main process of video compression techniques is motion estimation, which is divided into two types: local and global motion (Adolph & Buschmann, 1991; Dufaux & Moscheni, 1995). Local motion describes motion induced by the movement of objects in the scene. Global motion describes motion caused by camera movements such as panning, tilting, rotation, and zooming. In this paper, we focus on global motion. Global motion estimation (GME) uses a parametric motion model to describe and estimate motion over the whole frame and generate the motion vector (Dufaux & Moscheni, 1995). GME has been added to the recent MPEG-4 standard for video compression (Li et al., 2001). GME is a main process in the field of object-based video applications such as video object segmentation, scene construction, and video coding.

In recent years, parallel computation has been widely used in many areas because it has been successful in achieving high computing performance. In video encoding, the idea of data partitioning is to divide each frame into a number of data blocks and then map these blocks onto the corresponding processors, which perform their computations in parallel. Parallel implementation of the video encoding process increases performance for real-time multimedia applications. Examples of parallel programming models include open multi-processing (OpenMP) and the message passing interface (MPI). These models are also used to parallelize sequential processes to enhance performance while maintaining the same functionality. The OpenMP parallel programming model is an open standard for shared-memory, multi-platform parallel programming in C, C++, and Fortran. For multicore architectures with shared memory, OpenMP is more suitable than other parallel programming models (OpenMP Architecture Review Board, 2013).

There have been many research efforts to parallelize different aspects of modern video codecs (a codec performs coding and decoding). For example, He et al. (1998) presented a scheme to parallelize the encoding process in which each video object plane (VOP) was assigned to one group of workstations; the relationships between VOPs were synchronized using a Petri net model, and the earliest deadline first (EDF) scheduling algorithm was used to allocate the objects in a video session to workstations. Gunawan and Tong (2002) used a cluster-computing monitoring resource and the MPI parallel programming model to improve the execution time of motion estimation. Rodriguez et al. (2004) evaluated several parallel implementations of an MPEG-4 encoder over clusters of workstations using parallel data distribution methods. Wu and Megson (2006) proposed a parallel linear hash table motion estimation algorithm (LHMEA), which divided each reference frame into equally sized regions that were processed in parallel.
