MM Talk Acoustic Context Recognition

Daniele Battaglino - NXP Software, Sophia Antipolis, EURECOM CIFRE PhD student
Multimedia Communications

Date: -
Location: Eurecom

Title: Acoustic Context Recognition

Imagine closing your eyes for a moment and listening carefully to the sounds around you. You may recognize things like footsteps, the fan running in the room, cars passing, voices and other noises. Even in the absence of visual cues, humans can predict events and sounds and build a description of the acoustic environment. These acoustic cues provide information about objects which are not within the listener's field of vision. My current research focuses on the automatic recognition of the acoustic scene from a machine-learning point of view. Contextual information is particularly relevant in a scenario where devices and machines are always with us: devices may switch behaviour depending on the context, track different activities, or help people with hearing problems. This is a difficult task for both humans and machines, and it therefore requires the study of new features; features specifically designed for speech and music are not optimal for this task. Another unsolved issue is how to adapt the recognition from a closed set of possible predictions to an open set, in order to reject sounds which have not been seen before. These challenges make acoustic scene classification an interesting task from both a research and an application point of view.
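The closed-set versus open-set distinction mentioned above can be illustrated with a toy sketch (not the method from the talk; the scene labels, 2-D features, and threshold below are all hypothetical): a nearest-centroid classifier that answers with one of the known scenes in the closed-set case, but rejects an input as "unknown" when it lies far from every known scene's centroid.

```python
import math

# Hypothetical 2-D feature vectors for two known acoustic scenes.
TRAIN = {
    "office": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)],
    "street": [(0.9, 0.8), (0.8, 0.9), (0.85, 0.85)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(x, threshold=0.3):
    """Closed-set nearest-centroid decision, plus open-set rejection:
    if even the nearest centroid is farther than `threshold`,
    the sound is treated as unseen and labelled 'unknown'."""
    label, dist = min(
        ((lab, math.dist(x, c)) for lab, c in CENTROIDS.items()),
        key=lambda t: t[1],
    )
    return label if dist <= threshold else "unknown"
```

A purely closed-set system would always return one of the trained labels; the distance threshold is one simple way to discard sounds that were never seen during training, at the cost of tuning where the rejection boundary lies.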