Cache policies that minimize the content retrieval cost have been studied through competitive analysis when the miss costs are additive and the sequence of content requests is arbitrary. More recently, a cache utility maximization problem has been introduced, where contents have stationary popularities and utilities are strictly concave in the hit rates. This paper bridges the two formulations by considering linear costs and content popularities. We show that minimizing the retrieval cost corresponds to solving an online knapsack problem, and we propose new dynamic policies inspired by simulated annealing, including DynqLRU, a variant of qLRU. For these policies we prove asymptotic convergence to the optimum under the characteristic time approximation. In practice, popularities vary over time and are difficult to estimate. DynqLRU does not require popularity estimation, and our realistic, trace-driven evaluation shows that it significantly outperforms state-of-the-art policies, reducing the retrieval cost by up to 45%.
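The abstract does not spell out the policy itself; as a rough illustration of the family it belongs to, the sketch below implements a qLRU-style cache in which the admission probability on a miss is scaled by the content's retrieval cost and decays over time, in the spirit of simulated annealing. The class name DynQLRUSketch, the admission formula, and the cooling schedule are assumptions made for illustration only, not the policy defined in the report.

```python
import random
from collections import OrderedDict

class DynQLRUSketch:
    """qLRU-style cache with a time-varying, cost-aware admission probability.

    On a hit the item is moved to the most-recently-used position.
    On a miss the item is admitted with a probability that grows with its
    retrieval cost and shrinks over time, loosely mimicking a simulated-
    annealing cooling schedule.  The admission formula and cooling schedule
    below are illustrative assumptions, not taken from the report.
    """

    def __init__(self, capacity, q0=1.0, cooling=1e-4):
        self.capacity = capacity
        self.q0 = q0                # initial admission-probability scale (assumption)
        self.cooling = cooling      # cooling rate (assumption)
        self.t = 0                  # request counter
        self.cache = OrderedDict()  # keys ordered from LRU (front) to MRU (back)

    def request(self, item, cost=1.0):
        """Serve one request; return the retrieval cost paid (0 on a hit)."""
        self.t += 1
        if item in self.cache:
            self.cache.move_to_end(item)        # hit: refresh recency
            return 0.0
        # Miss: the cost is paid, then the item is admitted probabilistically.
        temperature = 1.0 / (1.0 + self.cooling * self.t)
        q = min(1.0, self.q0 * cost * temperature)
        if random.random() < q:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the LRU item
            self.cache[item] = True
        return cost
```

A trace-driven evaluation in this style would simply sum the costs returned by request() over the request sequence.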
Cache policies for linear utility maximization
Research Report RR-9010, January 2017
Type: Report
Date: 2017-01-24
Department: Data Science
Eurecom Ref: 5122
Copyright: © INRIA. Personal use of this material is permitted. The definitive version of this paper was published in Research Report RR-9010, January 2017 and is available at:
See also: PERMALINK: https://www.eurecom.fr/publication/5122