Effectiveness Evaluation of Replacement Policies for On-Chip Caches in Multiprocessors

Jobin Jose, Shameedha Begum, Ramasubramanian N.
DOI: 10.4018/IJERTCS.289202

Abstract

The cache plays a vital role in improving performance in multicore environments, especially the Last Level Cache (LLC). Performance improvements depend on block size, associativity, and the replacement policy. Most prior work relies on traditional Least Recently Used (LRU) based replacement policies for replacement decisions; unfortunately, these decisions do not enhance cache performance as expected. An enhanced, modified Pseudo-LRU policy, an approximation of LRU, is proposed. The proposed methodology uses counters to increase the confidence of replacement decisions based on the history of the replaceable blocks in the cache. Simulation results clearly show that the proposed replacement scheme improves the miss ratio by about 3% and energy efficiency by about 2% on average.
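The counter-augmented pseudo-LRU idea can be illustrated with a minimal sketch. The code below assumes a standard tree-based PLRU over a single 4-way set, plus a hypothetical saturating reuse counter per way that lets a block veto its own eviction once; the paper's actual counter mechanism and victim-redirection rule are not specified in this preview and may differ:

```python
class CounterPLRUSet:
    """One 4-way cache set: tree-PLRU victim selection plus a
    saturating per-way reuse counter (hypothetical confidence
    mechanism; the paper's exact scheme is not spelled out here)."""

    def __init__(self, max_count=3):
        # bits[0]: root (0 = victim in ways 0/1, 1 = victim in ways 2/3)
        # bits[1]: chooses within ways 0/1, bits[2]: within ways 2/3
        self.bits = [0, 0, 0]
        self.count = [0, 0, 0, 0]   # saturating reuse counters
        self.max_count = max_count

    def touch(self, way):
        """On a hit or fill: point the tree away from `way`
        and record one more unit of reuse history."""
        if way < 2:
            self.bits[0], self.bits[1] = 1, 1 - way   # victim side -> right pair
        else:
            self.bits[0], self.bits[2] = 0, 3 - way   # victim side -> left pair
        self.count[way] = min(self.count[way] + 1, self.max_count)

    def _plru_victim(self):
        # Follow the tree bits from the root to a leaf way.
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def victim(self):
        """Pick the PLRU victim, but let a block with reuse history
        veto its own eviction once (an illustrative policy choice)."""
        way = self._plru_victim()
        if self.count[way] > 0:
            self.count[way] -= 1   # spend one unit of confidence
            way ^= 1               # evict the sibling way instead
        return way
```

After filling ways 0..3 in order, plain tree-PLRU would evict way 0; here its counter spares it once and the sibling way 1 is evicted instead, which is the kind of history-based correction the abstract describes.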

Introduction

Processing power and energy dissipation are major concerns in today's multi-core architectures, and they have pushed the technology front into a new dimension called low-power VLSI (Hanson, H. et al., June 2003). Multiple processing cores operating simultaneously, or in parallel, need a large amount of memory; because memory capability does not scale in step with multicore chips, the processing cores face access limitations. Current research therefore focuses on designing efficient memory hierarchies (Laha et al., 2002). Such designs narrow the performance gap between cache levels (Asaduzzaman, A. et al., 2009), reducing accesses to the lower cache levels; in addition, the average memory access time is measurably reduced, improving execution performance. Level 2 (L2) caches have been investigated in depth by numerous researchers for various reasons (Patidar, K., 2015). Introducing this extra level turns some misses into L2 hits, so part of the miss latency is hidden (Flautner, K. et al., May 2002). The cache also helps exploit instruction-level parallelism (Carr, S., 1996): together with a processor that executes instructions out of order, non-blocking cache lines let execution continue past a miss. Even so, the miss penalty of the L2 cache is hard to hide. L1 optimizations that shorten hit times are, in terms of execution time, harder to exploit than additional hits in the L2 cache (Anjana, J. G., and Prasanth, M., May 2014). The replacement policy is an important element common to all cache levels in a multilevel memory hierarchy.
The efficiency of a replacement policy depends mainly on the hit rate (or miss rate) and, in some cases, on the access latency, bandwidth utilization, and response time of the cache system.
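The link between miss rate and average memory access time (AMAT) in a two-level hierarchy can be made concrete with the standard AMAT formula; the latency values and miss rates below are illustrative placeholders, not figures from the paper:

```python
def amat(l1_hit_time, l1_miss_rate, l2_hit_time, l2_miss_rate, mem_time):
    """AMAT = L1 hit time + L1 miss rate x (L2 hit time
              + L2 miss rate x main-memory time), all in cycles."""
    return l1_hit_time + l1_miss_rate * (l2_hit_time + l2_miss_rate * mem_time)

# Illustrative numbers: 1-cycle L1, 10-cycle L2, 200-cycle memory.
baseline = amat(1, 0.10, 10, 0.40, 200)   # -> 10.0 cycles
improved = amat(1, 0.10, 10, 0.37, 200)   # slightly lower L2 miss ratio -> 9.4 cycles
```

Even a small reduction in the LLC miss ratio compounds through the hierarchy, which is why replacement-policy improvements of a few percent translate into measurable execution-time and energy gains.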

In modern processors, the lowest level of the on-chip cache hierarchy is termed the last-level cache (LLC). Because the LLC is shared among the cores, it is a major performance bottleneck, and smarter replacement policies enable efficient optimization at this level. Since LLCs are highly associative (Gutierrez, A. et al., 2014) and large compared to the other cache levels, the replacement policy becomes the key technique for optimizing performance in terms of hits and misses, while also ensuring effective utilization with greater energy efficiency.
