
Heuristic Prediction for L1 Data Cache Misses for Low Power Cache Memory to Reduce the Cache Miss Penalty


Pardeep Malik, G.L. Bajaj Institute of Technology & Management


Heuristic Prediction, Low Power Cache Memory


Memory accesses have long accounted for about half of a microprocessor system's power consumption. Tuning a cache's total size, line size, and associativity to the needs of a particular program yields substantial benefits in both performance and power. Until recently, customizing caches was restricted to core-based design flows; however, several configurable cache architectures have since been proposed for use in pre-fabricated microprocessor platforms. Chip-level techniques alone can no longer keep power dissipation at an acceptable level, and many researchers have therefore made efforts to reduce power dissipation at the architectural level by lowering the consumption of on-chip caches, which are a major power consumer in microprocessors. In this paper we propose a runtime mechanism that heuristically predicts the throughput of an application using a reconfigurable low-power L1 cache, together with on-chip hardware implementing an efficient L1 cache-tuning heuristic that can automatically, transparently, and dynamically reduce the cache miss penalty of an executing program. Other approaches follow the program flow and prefetch all target addresses into the L1 data cache, including blocks that already reside there. Our approach instead predicts the stream of upcoming misses heuristically, using a genetic algorithm guided by the recency of the data accessed, and then prefetches only the next miss address of the stream. It uses a general prefetching framework, the two-phase prediction (TPP) algorithm, that gives each stream request its own address predictor; we compare the TPP algorithm with the latest variants of stream buffers and the Markov predictor. In practice, the cache miss penalty can vary by one to two orders of magnitude because of the large gap between sequential and random disk access rates.
Our heuristic seeks not only to reduce the number of cache misses that must be examined, but also to traverse the search space in a way that avoids costly cache flushes entirely. The goal of this paper is to present a solution for predicting miss information in a low-power L1 cache, using genetic algorithms to drive cache operation heuristically.
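To make the idea of recency-driven next-miss prediction concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: a first-order Markov-style predictor over L1 miss addresses that, on each miss, records the transition from the previous miss and predicts only the most recently observed successor (a recency preference, as the abstract emphasizes). The class name and parameters are hypothetical.

```python
from collections import defaultdict, deque

class NextMissPredictor:
    """Illustrative next-miss predictor (hypothetical sketch).

    For each miss address, keep a short recency-ordered list of the
    miss addresses that have followed it; predict the most recent one.
    """

    def __init__(self, history_per_addr=4):
        # Recent successors per address, most recent first.
        self.successors = defaultdict(lambda: deque(maxlen=history_per_addr))
        self.prev_miss = None

    def observe_miss(self, addr):
        """Record a miss; return the predicted next miss address to
        prefetch, or None if no prediction is available yet."""
        if self.prev_miss is not None:
            succ = self.successors[self.prev_miss]
            if addr in succ:
                succ.remove(addr)
            succ.appendleft(addr)  # most recently seen successor first
        self.prev_miss = addr
        cand = self.successors[addr]
        return cand[0] if cand else None

# Example: a repeating miss stream A -> B -> C -> A ...
p = NextMissPredictor()
pred = None
for a in [0xA0, 0xB0, 0xC0, 0xA0]:
    pred = p.observe_miss(a)
# After observing A -> B once, a new miss on A predicts B
assert pred == 0xB0
```

Prefetching only this single predicted address, rather than every target along the program flow, is what keeps blocks already resident in the L1 data cache from being fetched redundantly.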

Other Details

Paper ID: IJSRDV4I70258
Published in: Volume : 4, Issue : 7
Publication Date: 01/10/2016
Page(s): 373-377
