IRIM at TRECVID 2014: Semantic indexing and instance search

Ballas, Nicolas; Labbé, Benjamin; Le Borgne, Hervé; Gosselin, Philippe; Picard, David; Redi, Miriam; Mérialdo, Bernard; Mansencal, Boris; Benois-Pineau, Jenny; Ayache, Stéphane; Hamadi, Abdelkader; Safadi, Bahjat; Derbas, Nadia; Budnik, Mateusz; Quénot, Georges; Gao, Boyang; Zhu, Chao; Tang, Yuxing; Dellandrea, Emmanuel; Bichot, Charles-Edmond; Chen, Liming; Benoit, Alexandre; Lambert, Patrick; Strat, Tiberius
TRECVID 2014, 18th International Workshop on Video Retrieval Evaluation, 10-12 November 2014, Orlando, USA

The IRIM group is a consortium of French teams supported by the GDR ISIS and working on multimedia indexing and retrieval. This paper describes its participation in the TRECVID 2014 semantic indexing (SIN) and instance search (INS) tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline to compute scores for the likelihood that a video shot contains a target concept. These scores are then used to produce a ranked list of the images or shots most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.2796, which ranked us 5th out of 15 participants. For the INS 2014 task, IRIM followed the classical bag-of-words (BoW) approach, trained only on the EastEnders dataset. Shot signatures were computed either on one key frame or on several key frames (sampled at 1 fps) with average pooling. A dissimilarity that computes a distance only over the visual words present in the query was tested. A saliency map, built from the object ROI to incorporate background context, was also tried. Late fusion of two individual BoW results with different detectors/descriptors (Hessian-Affine/SIFT and Harris-Laplace/Opponent SIFT) was used. The four submitted runs were the following:
- Run F_D_IRIM_1: late fusion of BoW with SIFT, dissimilarity L2p, on several key frames per shot, with context for queries, and BoW with Opponent SIFT, dissimilarity L1p, on one key frame per shot.
- Run F_D_IRIM_2: similar to F_D_IRIM_1, but context for queries was also used for the second BoW.
- Run F_D_IRIM_3: similar to F_D_IRIM_1, but with no context for queries.
- Run F_D_IRIM_4: similar to F_D_IRIM_2, but using the delta1 dissimilarity [46] (from the best INS 2013 run).
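The query-restricted dissimilarity mentioned above can be sketched as follows. This is a minimal illustration only: the function name, the Minkowski exponent p, and the absence of any histogram normalization are assumptions, not IRIM's actual implementation.

```python
import numpy as np

def query_restricted_dissimilarity(query_hist, shot_hist, p=1.0):
    """Accumulate a Minkowski-style distance only over the visual words
    that are present in the query BoW histogram (zero-count words in the
    query are ignored). A sketch of the L1p/L2p-style dissimilarity;
    the exact formula used by IRIM is not specified here."""
    mask = query_hist > 0
    diff = np.abs(query_hist[mask] - shot_hist[mask])
    return float(np.sum(diff ** p))
```

With p=1 this behaves like an L1 distance restricted to the query's support; with p=2 it is the corresponding squared-difference sum.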
We found that extracting several key frames per shot, coupled with average pooling, improved results. We confirmed that including context in queries was also beneficial. Surprisingly, our dissimilarity performed better than the delta1 dissimilarity.
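The average pooling of per-key-frame signatures and the late fusion of two BoW result lists can be sketched as follows. The fusion weight and the min-max score normalization are hypothetical choices for illustration; the paper does not state the exact fusion rule.

```python
import numpy as np

def shot_signature(frame_hists):
    """Average-pool the per-key-frame BoW histograms of one shot
    (key frames sampled at 1 fps) into a single shot signature."""
    return np.mean(np.asarray(frame_hists, dtype=float), axis=0)

def late_fuse(scores_a, scores_b, w=0.5):
    """Weighted late fusion of two per-shot score lists (e.g. from the
    SIFT-based and Opponent SIFT-based BoW systems), after min-max
    normalization of each list. The weight w is an assumed parameter."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return w * norm(scores_a) + (1 - w) * norm(scores_b)
```

Ranking the shots by the fused score then yields the submitted result list.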

Data Science
Eurecom Ref:
© NIST. Personal use of this material is permitted. The definitive version of this paper was published in TRECVID 2014, 18th International Workshop on Video Retrieval Evaluation, 10-12 November 2014, Orlando, USA and is available at: