Thomas Steiner - Google Inc. Multimedia Communications
Date: September 13th 2011 Location: Eurecom
Mobile devices like smartphones and digital cameras, together with social networks, enable people to generate, share, and consume enormous amounts of media content, both en route and at home. Common search operations, for example searching for a music clip based on artist name and song title on video platforms such as YouTube, can be achieved either based on potentially shallow human-generated metadata, or based on more profound content analysis driven by Optical Character Recognition (OCR) or Automatic Speech Recognition (ASR). However, more advanced use cases, such as summaries or compilations of several pieces of media content covering a certain event, are hard, if not impossible, to fulfill at large scale. One example of such an event is a keynote speech given at a conference, where, given a stable network connection, media content is published on social networks while the event is still going on. This talk gives an overview of ongoing work on a framework for media content processing that leverages social networks, utilizes the Web of Data, and applies fine-grained media content addressing schemes like Media Fragments URIs to provide a scalable and sophisticated solution for realizing the above use cases.
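To illustrate the kind of fine-grained addressing the abstract refers to, the W3C Media Fragments URI scheme lets a plain URI point at a sub-clip of a video, e.g. a temporal fragment such as #t=95,120 (Normal Play Time seconds). The sketch below, with a hypothetical clip URL, shows how such a temporal fragment could be parsed; it is an illustration of the addressing scheme, not part of the framework presented in the talk.

```python
from urllib.parse import urlsplit, parse_qs

def parse_temporal_fragment(uri):
    """Extract (start, end) seconds from the temporal dimension of a
    Media Fragments URI, e.g. #t=95,120 (NPT seconds assumed)."""
    fragment = urlsplit(uri).fragment      # e.g. "t=95,120"
    params = parse_qs(fragment)            # e.g. {"t": ["95,120"]}
    if "t" not in params:
        return None
    start, _, end = params["t"][0].partition(",")
    return (float(start) if start else 0.0,
            float(end) if end else None)

# Hypothetical URL: seconds 95 to 120 of a keynote recording.
clip = "http://example.org/keynote.webm#t=95,120"
print(parse_temporal_fragment(clip))  # (95.0, 120.0)
```

Because the fragment lives entirely in the URI, such references can be shared on social networks and resolved client-side without re-encoding or server-side clipping.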