Most caching algorithms are oblivious to requests' timescale, but caching systems are capacity constrained and, in practical cases, the hit rate may be limited by the cache's inability to serve requests fast enough. In particular, the hard-disk access time can be the key factor capping cache performance. In this article, we present a new cache replacement policy that takes advantage of a hierarchical caching architecture, and in particular of the access-time difference between memory and disk. Our policy is optimal when requests follow the independent reference model and significantly reduces the hard-disk load, as also shown by our realistic, trace-driven evaluation. Moreover, our policy can be applied in a more general context, since it is easily adapted to minimize any retrieval cost, as long as costs are additive over cache misses.
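The cost-minimization angle can be made concrete with a small sketch. Under the independent reference model with unit-size items and per-miss costs that add up over misses, the expected retrieval cost of a static cache of capacity k is minimized by keeping the k items with the largest popularity-times-cost product. The Python sketch below illustrates only this general idea; the names (popularity, miss_cost, capacity) are illustrative and it is not the specific policy proposed in the paper.

# Minimal illustrative sketch, assuming unit-size items, known request
# probabilities, and additive per-miss retrieval costs (e.g., disk access time).
# Not the paper's algorithm: just the cost-weighted ranking idea.

def cost_aware_static_cache(popularity, miss_cost, capacity):
    """Return the set of item ids to keep cached so as to minimize
    the expected retrieval cost sum over uncached items of p_i * c_i."""
    score = {item: popularity[item] * miss_cost[item] for item in popularity}
    ranked = sorted(score, key=score.get, reverse=True)
    return set(ranked[:capacity])

if __name__ == "__main__":
    popularity = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}   # request probabilities
    miss_cost = {"a": 1.0, "b": 8.0, "c": 8.0, "d": 8.0}      # e.g., memory vs. disk fetch time
    print(cost_aware_static_cache(popularity, miss_cost, capacity=2))
    # {'b', 'c'}: the cheap-to-refetch item "a" is left out despite being the most popular

With these (made-up) numbers, a purely popularity-based cache would keep "a" and "b" and pay an expected miss cost of 1.6 per request, whereas the cost-aware allocation pays 0.9, which is the kind of gain an access-time-aware policy targets.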
Access-time-aware cache algorithms
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS), Vol.2, N°4, November 2017
Type:
Journal
Date:
2017-11-21
Department:
Data Science
Eurecom Ref:
5393
Copyright:
© ACM, 2017. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS), Vol.2, N°4, November 2017 http://dx.doi.org/10.1145/3149001
See also:
PERMALINK : https://www.eurecom.fr/publication/5393