A New Eager Replication Approach Using a Non-Blocking Protocol Over a Decentralized P2P Architecture

Katembo Kituta Ezéchiel, Shri Kant Ojha, Ruchi Agarwal
Copyright: © 2020 | Pages: 32
DOI: 10.4018/IJDST.2020040106

Abstract

Eager replication of distributed databases over a decentralized Peer-to-Peer (P2P) network is often prone to unreliability because participating peers may or may not be available. Moreover, conflicts between transactions initiated by different peers to modify the same data are probable. These problems cause perpetual transaction abortion. Thus, a new Four-Phase-Commit (4PC) protocol has been designed, using nested transactions and the distributed voting technique, which allows a transaction to commit with the available peers and recovers unavailable peers when they become available again. After the new algorithm was implemented in C#, experiments made it possible to analyse its performance, which revealed that the new algorithm is efficient: in one second it can replicate a considerable number of records, and likewise a large volume of data can be queued for the subsequent recovery of the slave peers concerned when they become available again.

Introduction

When using a centralized computing system, i.e. one where data are stored on a central site (server) that receives all data-modification requests from the other sites (clients), data unavailability and unreliability are recurrent. This is because, at any time, a hardware or software failure of the central site, or even a connection problem or an overload of the central site, is likely to interrupt the access of client sites to the centralized data (Nicoleta-Magdalena, 2011; Maarten and Andrew, 2016). To avoid this problem, a decentralized or distributed approach is necessary, in which data duplication (or replication) is the primary means of producing data availability and reliability (Özsu and Valduriez, 2011; Fatos et al., 2012).

Currently, replication is the only technique that ensures the exchange of data between copies in distributed systems. For distributed databases, replication uses four strategies that result from the combination of two factors: "when" and "where". The "when" factor specifies when updates are broadcast to the other copies, either in real time (synchronously, i.e. eager replication) or in near real time (asynchronously, i.e. lazy replication), while the "where" factor indicates where updates occur before being propagated to the other copies, either on a centralized site (primary copy/mono-master) or on decentralized sites (everywhere/multi-master). This yields four strategies, namely: eager centralized, eager decentralized, lazy centralized and lazy decentralized (Özsu and Valduriez, 2011; Fatos et al., 2012; Spaho et al., 2015). Once implemented, these strategies ensure data availability and reliability, each according to its own technique.
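
Purely as an illustration (this code is not from the article, although the article's implementation was in C#), the two factors and their four combinations can be sketched as follows; all type and member names are hypothetical:

```csharp
// Hypothetical sketch of the "when" x "where" replication matrix.
public enum When  { Eager, Lazy }             // when updates are propagated
public enum Where { MonoMaster, MultiMaster } // where updates may originate

public record ReplicationStrategy(When Timing, Where Origin);

public static class Strategies
{
    // The four strategies obtained by combining the two factors.
    public static readonly ReplicationStrategy EagerCentralized   = new(When.Eager, Where.MonoMaster);
    public static readonly ReplicationStrategy EagerDecentralized = new(When.Eager, Where.MultiMaster);
    public static readonly ReplicationStrategy LazyCentralized    = new(When.Lazy,  Where.MonoMaster);
    public static readonly ReplicationStrategy LazyDecentralized  = new(When.Lazy,  Where.MultiMaster);
}
```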

Nevertheless, distributed systems in general, and distributed databases in particular, are nowadays migrating toward Peer-to-Peer (P2P) systems. In the latter, each peer is supposed to store the entire database (Özsu and Valduriez, 2011), i.e. the database is fully replicated, and the participants (peers) may be present or absent at any moment (Vu et al., 2009). In view of the replication strategies above, it is clear that the lazy or asynchronous strategy is the one appropriate for a P2P topology, since it allows the replicas of the various sites to diverge for a given period. Update propagation can thus be applied to the present peers, while the absent peers keep non-updated replicas and receive their updates when they become available again (Kituta et al., 2018).
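
A minimal sketch of this deferred propagation, assuming a hypothetical in-memory queue per absent peer rather than the article's actual recovery mechanism:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical sketch: lazy propagation that applies an update on the
// available peers and queues it for the absent ones, to be replayed
// when those peers become available again.
public class LazyPropagator
{
    private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _pending = new();

    public void Propagate(IEnumerable<(string PeerId, bool IsAvailable)> peers, string update)
    {
        foreach (var (peerId, isAvailable) in peers)
        {
            if (isAvailable)
                Send(peerId, update);                    // present peer: apply now
            else
                _pending.GetOrAdd(peerId, _ => new ConcurrentQueue<string>())
                        .Enqueue(update);                // absent peer: keep for recovery
        }
    }

    // Called when an absent peer reappears: replay its queued updates in order.
    public void Recover(string peerId)
    {
        if (_pending.TryGetValue(peerId, out var queue))
            while (queue.TryDequeue(out var update))
                Send(peerId, update);
    }

    private static void Send(string peerId, string update)
    {
        /* network call to the peer's replica, omitted in this sketch */
    }
}
```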

However, in order to preserve data availability and reliability in replicated database systems, the replicas must be consistent at all times. The traditional technique for ensuring strong copy consistency is to avoid inconsistencies by implementing a synchronous or eager refresh algorithm, typically based on the Two-Phase-Commit (2PC) protocol (Özsu and Valduriez, 2011). Unfortunately, for P2P replication this strategy suffers from a major problem: if any site is unavailable during update propagation by the master site, the transaction cannot commit. Beyond the unavailability of a site, concurrent execution makes the time complexity of a transaction very high. When transactions that attempt to update the same copy deadlock, the waiting time is usually long, and if it expires the transaction aborts just as if the site were unavailable.
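
The blocking behaviour that motivates the article's 4PC can be illustrated with a toy 2PC coordinator; this is a didactic sketch, not the protocol from the article, and all names are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Didactic sketch of a 2PC coordinator: a single unreachable participant
// (vote == null) or negative vote forces a global abort.
public interface IParticipant
{
    bool? Prepare();   // true = vote commit, false = vote abort, null = unreachable/timed out
    void Commit();
    void Abort();
}

public static class TwoPhaseCommit
{
    public static bool Run(IReadOnlyList<IParticipant> participants)
    {
        // Phase 1 (voting): collect votes; a failure or timeout yields no vote.
        var votes = participants.Select(p => p.Prepare()).ToList();

        // Phase 2 (decision): commit only if every participant voted yes.
        if (votes.All(v => v == true))
        {
            foreach (var p in participants) p.Commit();
            return true;
        }

        foreach (var p in participants) p.Abort();
        return false;   // one absent peer is enough to abort the whole transaction
    }
}
```

Because a single absent or deadlocked peer forces a global abort, and absences are the norm in a P2P network, 2PC-based eager replication leads to the perpetual transaction abortion described above; the proposed 4PC instead commits with the available peers and queues the updates needed to recover the others later.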
