Spatiotemporal modeling and matching of video shots

Galmar, Eric; Huet, Benoit
ICIP 2008, 1st Workshop on Multimedia Information Retrieval: New Trends and Challenges, October 12-15, 2008, San Diego, USA

In this paper, we propose a framework for modeling video sequences using a spatiotemporal description of video shots. Spatiotemporal volumes are extracted with an efficient segmentation algorithm. Each video shot is described by building an adjacency graph that models the visual properties of the volumes and the spatiotemporal relationships between them. The cost of extracting visual descriptors for the whole shot is reduced by efficiently propagating and merging region descriptors over the spatiotemporal volumes. For the comparison of video shots, we propose a similarity measure that tolerates variability in the spatiotemporal representation. Promising experimental results are observed on different visual video shot categories.
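The abstract describes the approach only at a high level. The Python sketch below illustrates one plausible reading of it: a shot represented as an adjacency graph whose nodes are spatiotemporal volumes carrying a visual descriptor, and a simple greedy node-matching similarity between two such graphs. All names (Volume, ShotGraph, shot_similarity), the mean-colour descriptor, and the matching strategy are illustrative assumptions, not the authors' actual algorithm, which also exploits the spatiotemporal relations encoded by the edges.

from dataclasses import dataclass, field

@dataclass
class Volume:
    """A spatiotemporal volume: a region tracked over consecutive frames."""
    vid: int
    descriptor: tuple   # e.g. mean (L, a, b) colour over the volume (assumed)
    frame_span: tuple   # (first_frame, last_frame)

@dataclass
class ShotGraph:
    """Adjacency graph of a shot: nodes are volumes, edges link volumes
    that touch in space and overlap in time."""
    volumes: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)

    def add_volume(self, v: Volume):
        self.volumes[v.vid] = v

    def add_adjacency(self, a: int, b: int):
        self.edges.add(frozenset((a, b)))

def descriptor_distance(u: Volume, v: Volume) -> float:
    """Euclidean distance between volume descriptors (illustrative choice)."""
    return sum((x - y) ** 2 for x, y in zip(u.descriptor, v.descriptor)) ** 0.5

def shot_similarity(g1: ShotGraph, g2: ShotGraph, max_dist: float = 50.0) -> float:
    """Toy similarity: greedily match each volume of g1 to its closest
    unmatched counterpart in g2 and average the normalised descriptor
    similarity.  The paper's measure additionally tolerates variability
    in the spatiotemporal representation; this only sketches the
    node-matching part."""
    if not g1.volumes or not g2.volumes:
        return 0.0
    total, unmatched = 0.0, set(g2.volumes)
    for u in g1.volumes.values():
        if not unmatched:
            break
        best = min(unmatched, key=lambda vid: descriptor_distance(u, g2.volumes[vid]))
        d = descriptor_distance(u, g2.volumes[best])
        total += max(0.0, 1.0 - d / max_dist)
        unmatched.remove(best)
    return total / max(len(g1.volumes), len(g2.volumes))

# Illustrative usage with two tiny two-volume shots (made-up descriptor values).
g_a, g_b = ShotGraph(), ShotGraph()
g_a.add_volume(Volume(0, (120.0, 30.0, 20.0), (0, 40)))
g_a.add_volume(Volume(1, (60.0, 10.0, 5.0), (10, 40)))
g_a.add_adjacency(0, 1)
g_b.add_volume(Volume(0, (118.0, 28.0, 22.0), (0, 35)))
g_b.add_volume(Volume(1, (64.0, 12.0, 4.0), (5, 35)))
g_b.add_adjacency(0, 1)
print(shot_similarity(g_a, g_b))  # close to 1.0 for visually similar shots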


Type: Conference
City: San Diego
Date: 2008-10-12
Department: Data Science
Eurecom Ref: 2577
Copyright: © 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PERMALINK : https://www.eurecom.fr/publication/2577