GIANNAKAS Theodoros


Thesis

Joint Optimization of Content Caching and Recommendation for Mobile Edge Systems

1.    Summary

Caching popular content at the edge of future mobile networks (e.g., in small cells or user devices) has been widely considered as a way to alleviate the backhaul bottleneck created by the very dense small-cell deployments deemed necessary to keep up with the ongoing data tsunami. While a number of interesting techniques have been proposed that explicitly target the wireless environment, such as femto-caching and coded caching, the majority of these approaches suffer from the rather limited storage capacity of edge caches. Even if one considers the additional "global" caching gains these methods promise, the gains envisioned in practical settings are rather modest, due to the tremendous and rapidly increasing size of the Internet content catalog. In this project, we propose to depart from the assumption of hard cache misses, common in most of these works, and consider soft cache misses: when the requested content is not found locally, an alternative, related content can be recommended instead, possibly leading to complete or partial user satisfaction without retrieving the original content over the expensive backhaul and core network. This thesis proposes a first-of-its-kind joint consideration of caching and recommendation systems, towards improving mobile edge caching performance. Such a convergence is quite timely, as popular content providers envision becoming virtual operators, facilitated by the RAN sharing and Mobile Edge Computing architectures considered for 5G.

2.    Context

The "network side" of caching: While caching had been widely studied in peer-to-peer systems and content distribution networks (CDNs), the seminal work of femto-caching was among the first to propose caching content on a set of inexpensive edge nodes (called ``helper'') nodes, which can be, for example, femto-cells with limited backhaul capacity. In this context, the network topology and the content popularity distribution (both assumed to be known) are used to solve the optimal content placement problem: which contents to store in which cache.

A number of follow-up works within the "femto-caching" framework have appeared, considering aspects such as storage in user devices, constraints on transmission capacity per helper (leading to a joint placement and routing problem), multi-layer video streaming where video quality can be traded off against hit rate, and multicast through multiple helper nodes using LTE's eMBMS framework. A number of other aspects have also been considered in this framework, such as local popularity patterns, the use of users' social relations to improve prefetching, as well as cache replacement policies. These latter works use stochastic geometry methods to model the random placement of users and helper nodes and the resulting topology, rather than assuming a given one. The common denominators between these works can be summarized as follows: (i) the main bottleneck is the backhaul link; (ii) the transmission phase is ignored or simplified; (iii) global caching gains stem from coverage overlaps between nearby cells.

The "communications side" of caching:When multiple nearby users are requesting content at the same time, the content delivery side of the problem in a wireless setup becomes just as important as the placement problem. Recent work by Maddah-Ali and Niesen revealed quite interesting findings about the fundamental gains achievable by jointly considering caching and coded transmissions. In a simple setup with a catalog of  files,  receiving users each with a request for any one of the  files, storage at the user devices with a cache of size of  files, the authors show that the amount of time  can be reduced not only due to the local caching gain  of traditional caching schemes, but also due to a global caching gain that scales with the total size of caching memory, .

While the above results were derived in a rather stylized setting, a number of follow-up studies have tried to address various shortcomings of the original work, towards distributed implementations, non-uniform popularity distributions, replacement algorithms, hierarchical topologies, etc. Beyond these works, caching has also been considered in the context of advanced cooperative transmission techniques at the physical layer, such as CoMP. CoMP can significantly improve performance, but requires all base stations (BSs) involved to exchange contents in order to create a virtual MIMO channel, imposing a large overhead on the already taxed backhaul. However, if every BS involved already stores the contents requested by every user involved, then a collaborative transmission can be performed just by knowing the channel matrix. Some authors argue that the caching and transmission algorithms at each involved BS must be jointly designed in order to facilitate such CoMP opportunities. Finally, in a very recent line of work, ideas from coded caching are also used to derive fundamental performance bounds on the impact of caching for a simple K-user interference channel.

Beyond state-of-the-art - the "content side" of caching:In the content placement problem, the global gain comes from  small cells being in range of a user, thus increasing the effective cache size by up to . Even with high densification  is expected to be less than  in most cases. In coded caching, the global gain comes from  nearby nodes overhearing a single broadcast transmission, and again ripping gains proportional to . In practice, small cells cannot have more than  users associated. Hence, if we consider that each cache can store about  of the entire content catalogue (an already highly optimistic value), one would require orders of magnitude larger K values to obtain real gains.

The main novelty of this thesis, in terms of approach, is to depart from the assumption of hard cache misses and consider soft cache misses. A request for content X leading to a cache miss is not immediately served from the core network; instead, alternative contents "similar" to X and locally stored can be proposed to the user. The user might then be fully satisfied with the proposed content (in the best case), fully dissatisfied (in the worst case), leading to a miss, or partially satisfied (a new scenario). Hence, (soft) cache hits are no longer binary, but may also take fractional values in [0, 1]. Within this framework, the main technical novelty of this project will be a first-of-its-kind joint consideration of wireless edge caching algorithms (subject to such soft cache hits/misses) and recommendation systems that aim to facilitate wireless edge caching in case of soft cache misses.

3.    Scientific Content and Methodology

In this project, we propose a joint treatment of caching and recommendation systems for storing content on the edge of future networks. The proposed work consists of 3 main tasks:

Task 1 - Recommendation-aided Optimization of Edge Caching: Consider the basic framework of femto-caching [1], where content must be allocated to M small cells (SCs), each equipped with a cache of size C contents, during off-peak hours. There are K total users, and each might request content i from a catalog of N total contents with a probability p_i. Each user n is within range of a number of SCs. The relation between users and SCs is a bipartite graph G, where S(n) denotes the set of SCs within range of user n. The problem variables are x_{ij} ∈ {0, 1}, where x_{ij} = 1 if SC j stores content i, and x_{ij} = 0 otherwise. The basic caching placement problem can be summarized as

    maximize_x   Σ_{n=1..K} Σ_{i=1..N} p_i · ( 1 − Π_{j ∈ S(n)} (1 − x_{ij}) )
    subject to   Σ_{i=1..N} x_{ij} ≤ C  for all j,   x_{ij} ∈ {0, 1}        (1)
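As a concrete illustration of this objective, the sketch below (our reconstruction of model (1); variable names and example sizes are ours, not from the thesis) evaluates the expected hit rate achieved by a given placement.

```python
# Illustrative sketch (our reconstruction of objective (1), not code from the
# thesis): expected cache hit rate of a placement x for the femto-caching model.

import numpy as np

def hit_rate(x, p, coverage):
    """
    x        : (N, M) 0/1 array, x[i, j] = 1 if SC j stores content i.
    p        : (N,) request probabilities of the N contents.
    coverage : list of length K; coverage[n] = indices of SCs in range of user n.
    Returns the objective of (1): sum over users and contents of
    p_i * (1 - prod_{j in S(n)} (1 - x_{ij})).
    """
    total = 0.0
    for scs in coverage:
        miss = np.prod(1 - x[:, scs], axis=1)   # per-content miss indicator
        total += np.sum(p * (1 - miss))
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, M, C, K = 50, 4, 5, 10                    # hypothetical example sizes
    p = rng.dirichlet(np.ones(N))                # random popularity profile
    coverage = [rng.choice(M, size=2, replace=False) for _ in range(K)]
    x = np.zeros((N, M), dtype=int)
    x[np.argsort(-p)[:C], :] = 1                 # naive: every SC caches the top-C
    print(f"expected hits per round of requests: {hit_rate(x, p, coverage):.3f}")
```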

The term in the parenthesis implies that if content i is requested by user n, and none of the SCs within range of n store that content (i.e., x_{ij} = 0 for all j ∈ S(n)), then there is a cache miss. Assume now that, if there is a miss, the system can instead recommend to user n an alternative content among the ones that are locally available. The relation between contents could be captured with an N × N utility matrix U, where u_{ij} denotes the utility a given user gets if she originally asks for content i but instead receives content j, with u_{ij} ∈ [0, 1] and u_{ii} = 1. This matrix is expected to be sparse, and in practice it could be inferred from the related content graph (e.g., the related video graph on YouTube). A very simple example would be to mark the k (e.g., top 10) most related videos of video i, setting u_{ij} = 1 for those and u_{ij} = 0 otherwise. The optimal caching problem can be rewritten as

    maximize_x   Σ_{n=1..K} Σ_{i=1..N} p_i · ( 1 − Π_{j ∈ S(n)} Π_{l=1..N} (1 − u_{il} · x_{lj}) )
    subject to   Σ_{i=1..N} x_{ij} ≤ C  for all j,   x_{ij} ∈ {0, 1}        (2)

Hence, a cache hit for content i is now achieved if i is stored in some SC j ∈ S(n) (i.e., if x_{ij} = 1, as before), but also if some other content l is available locally (x_{lj} = 1 for some j ∈ S(n)) for which u_{il} = 1 (i.e., content l is related to i and could equally satisfy the request). Within this task, we will investigate problems of this type, e.g., whether the objective remains submodular and thus amenable to greedy approximation algorithms, as well as continuous relaxations. We will further consider more realistic scenarios where u_{il} can take any value in [0, 1].
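A minimal sketch of such a greedy approach, under our reconstruction of objective (2) and with illustrative variable names, is given below; it is meant only to make the soft-cache-hit objective and the marginal-gain structure explicit, not as the algorithm to be developed in this task.

```python
# Minimal sketch (our own illustration, not the thesis algorithm): greedy
# placement for the soft-cache-hit objective (2), adding at each step the
# (content, SC) pair with the largest marginal gain until every cache is full.

import numpy as np

def soft_hit_rate(x, p, U, coverage):
    """Objective (2): sum_n sum_i p_i * (1 - prod_{j in S(n)} prod_l (1 - u_{il} x_{lj}))."""
    total = 0.0
    for scs in coverage:
        miss = np.ones(len(p))
        for j in scs:
            stored = x[:, j].astype(bool)              # contents held by SC j
            miss *= np.prod(1 - U[:, stored], axis=1)  # prod_l (1 - u_{il} x_{lj})
        total += np.sum(p * (1 - miss))
    return total

def greedy_placement(p, U, coverage, M, C):
    """Fill M caches of capacity C, one best (content, SC) pair at a time."""
    N = len(p)
    x = np.zeros((N, M), dtype=int)
    load = np.zeros(M, dtype=int)
    while load.min() < C:
        base = soft_hit_rate(x, p, U, coverage)
        best, best_gain = None, 0.0
        for j in range(M):
            if load[j] >= C:
                continue
            for i in range(N):
                if x[i, j]:
                    continue
                x[i, j] = 1
                gain = soft_hit_rate(x, p, U, coverage) - base
                x[i, j] = 0
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:        # no placement improves the objective further
            break
        x[best] = 1
        load[best[1]] += 1
    return x
```

If the objective is indeed submodular, a lazy-greedy variant would avoid re-evaluating most of these marginal gains at each step.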

Task 2 - Modeling and Impact of Content Relation Matrix U: The goal of this task is to analytically understand the properties of the related content matrix U and their impact on performance. Example 1: Each content i has on average d related contents, chosen uniformly and independently among the catalog. Given that d ≪ N, one can conjecture that caching gains from alternative contents, in case of a cache miss, will be rather modest. Example 2: The d elements of each row i for which u_{ij} > 0 are again chosen randomly, but now with probability proportional to content popularity. A content request resulting in a miss will now have a higher chance to be satisfied by one of its related contents. However, the exact benefits depend both on the content popularity distribution and on how it relates to the u_{ij} entries. First, we will study real datasets for Internet content and its relations (e.g., publicly available ones for YouTube file relations) to understand both the qualitative and quantitative properties of matrix U. Second, we will study analytically which specific properties of matrix U most facilitate caching algorithms. One promising direction is the spectral graph properties of U, which can compactly characterize macroscopic relations between many contents and which we expect to have an impact on caching gains.
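The exploratory sketch below (our own toy model; the Zipf-like popularity and all parameter values are assumptions) contrasts Examples 1 and 2 by estimating the soft-hit probability of a single cache that stores the most popular contents.

```python
# Exploratory sketch (our own illustration of Examples 1 and 2, not results from
# the thesis): generate a random relation matrix U with d related contents per
# row, either uniformly (Example 1) or popularity-biased (Example 2), and
# estimate the soft-hit probability of a single cache storing the top-C contents.

import numpy as np

def make_U(N, d, p=None, rng=None):
    """Binary N x N relation matrix; row i marks d related contents (plus itself)."""
    rng = rng or np.random.default_rng()
    U = np.eye(N)
    weights = None if p is None else p / p.sum()
    for i in range(N):
        rel = rng.choice(N, size=d, replace=False, p=weights)
        U[i, rel] = 1.0
    return U

def soft_hit_prob(U, p, cached):
    """P(request i is satisfied) = sum_i p_i * 1{some cached l has u_{il} = 1}."""
    hit = U[:, cached].max(axis=1) > 0
    return float(np.sum(p * hit))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, d, C = 2000, 10, 20
    p = (1.0 / np.arange(1, N + 1)) ** 0.8        # Zipf-like popularity (assumed)
    p /= p.sum()
    cached = np.argsort(-p)[:C]
    U_unif = make_U(N, d, rng=rng)                 # Example 1: uniform relations
    U_pop  = make_U(N, d, p=p, rng=rng)            # Example 2: popularity-biased
    print(f"hard hit rate            : {p[cached].sum():.3f}")
    print(f"soft hit rate (Example 1): {soft_hit_prob(U_unif, p, cached):.3f}")
    print(f"soft hit rate (Example 2): {soft_hit_prob(U_pop, p, cached):.3f}")
```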

Task 3 - Caching-aware Biasing for Recommendation Systems: There exists an abundance of recommendation systems and algorithms, and the topic is a fertile field for ongoing research. The investigation of novel recommendation systems per se is beyond the scope of this project. Instead, our goal in this last task is to study how one could modify the basic idea(s) of existing recommendation algorithms to facilitate caching gains, with a minimum footprint on the recommendation algorithm's intended semantics and goals. For example, we plan to investigate how a YouTube-like recommendation algorithm could be adapted within the context of future cellular (e.g., 5G) and edge (e.g., MEC) architectures, so that the contents it shows as recommended do not depend only on the pure recommendation results, but are also "biased" in favor of the caching algorithm. This problem could be formulated as maximizing the objective of the caching problem, by turning appropriate knobs in the recommendation algorithm, while minimizing the impact on the recommendation objective (e.g., captured as a constraint on user QoE).
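As a toy illustration of such a "knob" (entirely our own sketch; the scoring and the quality budget are hypothetical), one could re-rank the recommender's output in favour of locally cached contents, as long as the total relevance of the displayed list stays within a given fraction of the unbiased one.

```python
# Toy sketch (our own illustration of the "biasing knob" idea, not the thesis
# design): re-rank a recommendation list so that cached contents are favoured,
# subject to a constraint on how much recommendation quality may be sacrificed.

def cache_aware_rerank(scores, cached, n_show, max_quality_loss=0.1):
    """
    scores          : dict {content_id: relevance score from the baseline recommender}.
    cached          : set of content ids currently stored at the local edge cache.
    n_show          : number of recommendations to display.
    max_quality_loss: allowed relative drop in total relevance vs. the unbiased list.
    """
    baseline = sorted(scores, key=scores.get, reverse=True)[:n_show]
    baseline_quality = sum(scores[c] for c in baseline)

    # Start from the unbiased list and greedily swap in the best cached candidates,
    # as long as the total relevance stays within the allowed budget.
    shown = list(baseline)
    candidates = sorted((c for c in cached if c not in shown),
                        key=scores.get, reverse=True)
    for cand in candidates:
        worst = min((c for c in shown if c not in cached),
                    key=scores.get, default=None)
        if worst is None:
            break
        new_quality = sum(scores[c] for c in shown) - scores[worst] + scores[cand]
        if new_quality >= (1.0 - max_quality_loss) * baseline_quality:
            shown[shown.index(worst)] = cand
    return shown

if __name__ == "__main__":
    scores = {f"v{i}": 1.0 / i for i in range(1, 21)}   # hypothetical relevance scores
    cached = {"v5", "v9", "v15"}
    print(cache_aware_rerank(scores, cached, n_show=5))
```

Here max_quality_loss plays the role of the QoE constraint mentioned above: setting it to zero recovers the unbiased recommender, while larger values trade recommendation quality for cache hits.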