We consider in-memory key-value stores used as caches, and their elastic provisioning in the cloud. The cost associated with such caches includes not only the storage cost but also the cost due to misses: the cache miss ratio has a direct impact on the performance perceived by end users, and this in turn affects the overall revenues for content providers. Our aim is to dynamically adapt the number of caches to the traffic pattern so as to minimize the overall costs. We present a dynamic algorithm for TTL caches whose goal is to obtain close-to-minimal costs. We then propose a practical implementation with limited computational complexity: our scheme requires constant overhead per request, independent of the cache size. Using real-world traces collected from the Akamai content delivery network, we show that our solution achieves significant cost savings, especially in highly dynamic settings that are likely to require elastic cloud services.
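The paper's algorithm itself is not reproduced here, but the basic object it builds on, a TTL cache where each entry expires a fixed time after insertion and every operation does constant work per request, can be sketched as follows. This is an illustrative sketch only; the class and method names are assumptions, not the authors' implementation.

```python
import time


class TTLCache:
    """Minimal TTL cache sketch: each entry expires `ttl` seconds after
    insertion. Lookups and inserts do O(1) work per request, independent
    of cache size, by expiring entries lazily on access."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl          # time-to-live of each entry, in seconds
        self.clock = clock      # injectable clock (eases testing)
        self._store = {}        # key -> (value, expiry_timestamp)

    def get(self, key):
        """Return the cached value, or None on a miss (absent or expired)."""
        entry = self._store.get(key)
        if entry is None:
            return None                 # miss: key never cached
        value, expiry = entry
        if self.clock() >= expiry:
            del self._store[key]        # expired: evict lazily, count as miss
            return None
        return value                    # hit

    def put(self, key, value):
        """Insert or refresh an entry; (re)insertion resets its TTL."""
        self._store[key] = (value, self.clock() + self.ttl)
```

An elastic provisioning scheme in the spirit of the abstract would then tune the TTL (and hence the storage footprint) over time based on the observed miss cost, rather than keeping it fixed as in this sketch.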
TTL-based cloud caches
INFOCOM 2019, IEEE International Conference on Computer Communications, 29 April-2 May 2019, Paris, France
© 2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PERMALINK : https://www.eurecom.fr/publication/5925