Data redundancy and maintenance for peer-to-peer file backup systems

Duminuco, Alessandro




The amount of digital data produced by users, such as photos, videos, and digital documents, has grown tremendously over the last decade. These data are valuable and need to be backed up safely. Solutions based on DVDs and external hard drives, though common, are impractical and do not provide the required level of reliability, while centralized solutions are costly. For this reason the research community has shown an increasing interest in the use of peer-to-peer systems for file backup. The key property that makes peer-to-peer systems appealing is self-scaling: as more peers join the system, the service capacity grows along with the service demand.

The design of a peer-to-peer file backup system is a complex task and presents a considerable number of challenges. Peers can be intermittently connected or can fail at a rate considerably higher than in centralized storage systems. Our interest focuses in particular on how to provide reliable storage of data efficiently, applying appropriate redundancy schemes and adopting the right mechanisms to maintain this redundancy. This task is not trivial, since data maintenance in such systems may require significant resources in terms of storage space and communication bandwidth.

Our contribution is twofold. First, we study erasure coding redundancy schemes that combine the bandwidth efficiency of replication with the storage efficiency of classical erasure codes; in particular, we introduce and analyze two new classes of codes, namely Regenerating Codes and Hierarchical Codes. Second, we propose a proactive adaptive repair scheme, which combines the adaptiveness of reactive systems with the smooth bandwidth usage of proactive systems, generalizing the two existing approaches.
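To illustrate the storage-efficiency argument for erasure coding over replication, the following is a minimal sketch of a toy single-parity XOR code (not the thesis's Regenerating or Hierarchical Codes, which are more general): k data blocks plus one parity block tolerate the loss of any one block while storing k+1 blocks, whereas tolerating one loss with full replication would require 2k blocks. The `encode`/`repair` functions and the block contents are illustrative assumptions, not code from the thesis.

```python
def encode(blocks):
    """Append one XOR parity block computed over the k data blocks."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return blocks + [parity]

def repair(stored, lost_index):
    """Rebuild the lost block by XOR-ing the k surviving blocks."""
    survivors = [b for i, b in enumerate(stored) if i != lost_index]
    rebuilt = bytes(len(survivors[0]))
    for b in survivors:
        rebuilt = bytes(x ^ y for x, y in zip(rebuilt, b))
    return rebuilt

data = [b"aaaa", b"bbbb", b"cccc"]   # k = 3 data blocks
stored = encode(data)                # 4 blocks, e.g. one per peer
stored[1] = None                     # one peer fails
stored[1] = repair(stored, 1)        # repair from the survivors
assert stored[:3] == data            # the lost block is recovered
```

Note that repairing one block here requires downloading all k surviving blocks; reducing exactly this repair bandwidth is the motivation behind schemes such as Regenerating Codes.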

Digital security
Eurecom Ref:
© TELECOM ParisTech. Personal use of this material is permitted. The definitive version of this paper was published in Thesis and is available at: