Graduate School and Research Center in Digital Sciences

User variance and its impact on video retrieval benchmarking

Wilkins, Peter;Troncy, Raphaël;Halvey, Martin;Byrne, Daragh;Amin, Alia;Punitha, P.;Smeaton, Alan F.;Villa, Robert

CIVR 2009, 8th ACM International Conference on Image and Video Retrieval, July 8-10, 2009, Santorini Island, Greece

In this paper, we describe one of the largest multi-site interactive video retrieval experiments conducted in a laboratory setting. Interactive video retrieval performance is difficult to cross-compare, as variables exist across users, interfaces and the underlying retrieval engine. Conducted within the framework of TRECVID 2008, we completed a multi-site, multi-interface experiment. Three institutes participated, involving 36 users, 12 each from Dublin City University (DCU, Ireland), University of Glasgow (GU, Scotland) and Centrum Wiskunde & Informatica (CWI, the Netherlands). Three user interfaces were developed, all of which used the same search service. Using a Latin square arrangement, each user completed 12 topics, leading to 6 TRECVID runs per site, 18 in total. This allowed us to isolate the factors of users and interfaces from retrieval performance. In this paper we present an analysis of both the quantitative and qualitative data generated from this experiment, demonstrating that for interactive video retrieval with "novice" users, performance can vary by up to 300% for the same system using different sets of users, whilst differences in performance between interface variants were, by comparison, not statistically significant. Our results have implications for the manner in which interactive video retrieval experiments using non-expert users are evaluated. The primary focus of this paper is in highlighting that non-expert users generate very large performance fluctuations, which may either mask or create system variability. A discussion of why this happens is beyond the scope of this paper.


Title: User variance and its impact on video retrieval benchmarking
Keywords: TRECVID, CBMIR, Video Retrieval, User Study
Type: Conference
Language: English
City: Santorini Island
Country: Greece
Date: July 2009
Department: Data Science
Eurecom ref: 2941
Copyright: © ACM, 2009. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CIVR 2009, 8th ACM International Conference on Image and Video Retrieval, July 8-10, 2009, Santorini Island, Greece http://dx.doi.org/10.1145/1646396.1646400
Bibtex: @inproceedings{EURECOM+2941, doi = {http://dx.doi.org/10.1145/1646396.1646400}, year = {2009}, title = {{U}ser variance and its impact on video retrieval benchmarking}, author = {{W}ilkins, {P}eter and {T}roncy, {R}apha{\"e}l and {H}alvey, {M}artin and {B}yrne, {D}aragh and {A}min, {A}lia and {P}unitha, {P}. and {S}meaton, {A}lan {F}. and {V}illa, {R}obert }, booktitle = {{CIVR} 2009, 8th {ACM} {I}nternational {C}onference on {I}mage and {V}ideo {R}etrieval, {J}uly 8-10, 2009, {S}antorini {I}sland, {G}reece}, address = {{S}antorini {I}sland, {GREECE}}, month = {07}, url = {http://www.eurecom.fr/publication/2941} }