Cost Evaluation of Synchronization Algorithms for Multicore Architectures

Masoud Hemmatpour, Renato Ferrero, Filippo Gandino, Bartolomeo Montrucchio, Maurizio Rebaudengo
Copyright © 2018 | Pages: 15
DOI: 10.4018/978-1-5225-2255-3.ch346

Abstract

In a multicore environment, a major focus is represented by the synchronization among threads and processes. Since synchronization mechanisms strongly affect the performance of multithread algorithms, the selection of an effective synchronization approach is critical for multicore environments. In this chapter, the cost of the main existing synchronization techniques is estimated. The current investigation covers both hardware and software solutions. A comparative analysis highlights benefits and drawbacks of the considered approaches. The results are intended to represent a useful aid for researchers and practitioners interested in optimization of parallel algorithms.

Background

When threads work simultaneously on a shared object, their synchronization must be managed properly; otherwise, the instructions of different threads may interleave on the shared object in the wrong way. For example, Figure 1 shows the program order of two threads operating on the shared object counter (Silberschatz, 2006). Since one thread increments the counter and the other decrements it, the counter is expected to end with its initial value. However, as Figure 1 illustrates, there is a possible execution order of the instructions that leads to an incorrect result.

Figure 1. Incorrect execution of the instructions order
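As a concrete illustration of this interleaving, consider the following minimal C sketch. It is not taken from the chapter: the thread bodies, the ITERATIONS constant, and the use of POSIX threads are assumptions made only for illustration. One thread increments and another decrements an unprotected shared counter, and the final value frequently differs from the expected zero.

/*
 * Illustrative sketch (not from the chapter) of the counter race in Figure 1:
 * two threads update a shared counter without synchronization, so the final
 * value is not guaranteed to equal the initial one.
 */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;            /* shared object */

static void *increment(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

static void *decrement(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        counter--;                  /* interleaves with the increments */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 0, but the unsynchronized interleaving often yields another value. */
    printf("counter = %ld (expected 0)\n", counter);
    return 0;
}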

Synchronization mechanisms are used to avoid such problematic interleaving of instructions. The part of the code that accesses the shared object is called the critical section. The critical section must be protected by synchronization primitives that prevent concurrent access to the shared object, as in the sketch below.
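The following sketch is again an illustrative assumption rather than the chapter's code: it protects the same counter update with a POSIX mutex, so each read-modify-write of the shared object executes as an indivisible critical section and the final value is always zero.

/*
 * Sketch, assuming POSIX threads: a mutex protects the critical section that
 * updates the shared counter, so increments and decrements no longer
 * interleave incorrectly.
 */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *update(void *arg)
{
    long delta = *(long *)arg;

    for (long i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter the critical section */
        counter += delta;                    /* exclusive access to the shared object */
        pthread_mutex_unlock(&counter_lock); /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    long inc = 1, dec = -1;

    pthread_create(&t1, NULL, update, &inc);
    pthread_create(&t2, NULL, update, &dec);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld (always 0)\n", counter);
    return 0;
}

The mutex is only one of the synchronization approaches compared in the chapter; the same critical section could equally be protected by hardware atomic instructions or by spinning primitives.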

Key Terms in this Chapter

Memory Barrier: An operation that prevents the reordering of instructions and memory accesses across it.

Multicore Architecture: A processor with two or more cores, i.e., independent processing units.

Performance: Number of operations completed per unit of time.

Race Condition: A condition in which more than one thread attempts to read and write a shared object concurrently, leading to undefined behavior.

Synchronization: A technique for coordinating threads or processes so that they execute in the appropriate order.

Spinning: The act of repeatedly querying (or, in some cases, modifying) an object and waiting until the desired value is observed before entering the critical section (see the sketch after these definitions).

Critical Section: A part of the code that accesses the shared object.
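As a combined illustration of the Spinning and Memory Barrier terms above (a sketch assuming C11 atomics and POSIX threads, not code from the chapter), the following spinlock busy-waits on an atomic flag before entering the critical section; the acquire and release orderings act as the memory barriers that keep the protected accesses inside the lock.

/*
 * Sketch, assuming C11 atomics: threads spin on an atomic flag until they
 * observe it free before entering the critical section; acquire/release
 * orderings prevent the protected accesses from being reordered outside it.
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void)
{
    /* Spinning: retry until the flag was previously clear. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;   /* busy-wait */
}

static void spin_unlock(void)
{
    /* Release barrier: earlier writes become visible before the lock is freed. */
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        spin_lock();
        counter++;          /* critical section */
        spin_unlock();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Because the waiting thread keeps the processor busy while it spins, this approach pays off only when the critical section is short; for longer critical sections, blocking primitives such as mutexes are usually preferable.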
