To provide the required amount of storage and bandwidth, a video server typically comprises a large number of disks. As the total number of disks increases, the striping algorithm that determines how video data are distributed across the disks becomes decisive for overall server cost and performance. Introducing fault-tolerance against disk failures also becomes a must. In this paper, we first evaluate different striping algorithms in terms of throughput, buffer requirement, and start-up latency for a non-fault-tolerant server. We then examine the impact of data striping on a fault-tolerant server and show that the striping policy and the optimal technique to assure fault-tolerance are related: depending on the technique used to assure fault-tolerance (mirroring or parity), different striping techniques perform best.
Data striping and reliability aspects in distributed video servers
Cluster Computing: Networks, The Journal of Networks, Software Tools and Applications, Volume 2, N°1, March 1999
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in Cluster Computing: Networks, The Journal of Networks, Software Tools and Applications, Volume 2, N°1, March 1999 and is available at http://dx.doi.org/10.1023/A:1019054003646
PERMALINK: https://www.eurecom.fr/publication/282