This paper compares several approaches to natural language access to video databases. We present two main strategies. The first is visual and consists of comparing keyframes with images retrieved from Google Images. The second is textual and consists of generating text-based descriptions of the keyframes and comparing these descriptions with the query. We study the effect of several parameters and find that substantial improvement is possible by choosing the right strategy for a given topic. Finally, we investigate a method for selecting the appropriate strategy for a given topic.
Visual versus textual embedding for video retrieval
ACIVS 2017, Advanced Concepts for Intelligent Vision Systems, September 18-21, 2017, Antwerp, Belgium
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in ACIVS 2017, Advanced Concepts for Intelligent Vision Systems, September 18-21, 2017, Antwerp, Belgium and is available at:
PERMALINK: https://www.eurecom.fr/publication/5319