Evaluation remains a major challenge in the development of video summarization systems. Rigorous evaluation of automatically generated video summaries is a complicated process because the ground truth is often difficult to define, and even when it exists, it is difficult to match against the obtained results. The TRECVID BBC evaluation campaign has recently introduced a rushes summarization task and has defined a manual evaluation methodology. In this paper, we explore the use of machine learning techniques to automate this evaluation. We present our approach and describe the current results, comparing them with the manual evaluations performed in the 2007 campaign.
Automatic evaluation method for rushes summarization: experimentation and analysis
CBMI 2008, 6th International Workshop on Content-Based Multimedia Indexing, June 18-20, 2008, London, UK
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PERMALINK: https://www.eurecom.fr/publication/2460