A multimodal approach to music transcription

Paleari, Marco; Huet, Benoit; Schutz, Antony; Slock, Dirk T. M.
ICIP 2008, 1st Workshop on Multimedia Information Retrieval: New Trends and Challenges, October 12-15, 2008, San Diego, USA

Music transcription refers to the extraction of a human-readable and interpretable description from a recording of a music performance. Automatic music transcription remains a challenging research problem, particularly when dealing with polyphonic sounds or when certain constraints are relaxed. Instruments such as guitars and violins add ambiguity to the problem, as the same note can be played at different positions. When dealing with guitar music, tablatures are often preferred to the usual music score, as they present the information in a more accessible way. Here, we address this issue with a system which uses the visual modality to support traditional audio transcription techniques. The system is composed of four modules, which have been implemented and evaluated: a system which tracks the position of the fretboard in a video stream, a system which automatically detects the position of the guitar in the first frame to initialize the tracker, a system which detects the position of the hand on the guitar, and finally a system which fuses the visual and audio information to extract a tablature. Results show that this kind of multimodal approach can easily disambiguate 89% of the notes in a deterministic way.
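The disambiguation step can be illustrated with a small sketch: on a guitar, a single detected pitch maps to several equivalent (string, fret) positions, and the hand position estimated from the video is what narrows the choice. The following is a minimal Python example, not the authors' implementation; the standard tuning, fret count, and hand-reach model are assumptions made purely for illustration.

```python
# Minimal sketch of audio-visual disambiguation (illustrative assumptions only):
# the audio gives a pitch, the video gives an approximate hand position, and the
# intersection of the two constraints selects the (string, fret) candidates.

OPEN_STRINGS = [40, 45, 50, 55, 59, 64]  # assumed standard tuning E2 A2 D3 G3 B3 E4 (MIDI)
NUM_FRETS = 19                           # assumed fretboard length

def candidate_positions(midi_pitch):
    """All (string, fret) pairs on the fretboard that produce the given MIDI pitch."""
    candidates = []
    for string, open_pitch in enumerate(OPEN_STRINGS):
        fret = midi_pitch - open_pitch
        if 0 <= fret <= NUM_FRETS:
            candidates.append((string, fret))
    return candidates

def disambiguate(midi_pitch, hand_fret, reach=3):
    """Keep candidates reachable from the observed hand position (hypothetical reach model)."""
    return [(s, f) for (s, f) in candidate_positions(midi_pitch)
            if f == 0 or abs(f - hand_fret) <= reach]

if __name__ == "__main__":
    # E4 (MIDI 64) can be fingered on several strings; seeing the hand near fret 9
    # leaves a single fretted candidate plus the open high-E string.
    print(candidate_positions(64))        # [(1, 19), (2, 14), (3, 9), (4, 5), (5, 0)]
    print(disambiguate(64, hand_fret=9))  # [(3, 9), (5, 0)]
```

In the system described in the paper, this kind of constraint comes from the fusion module, which combines the tracked fretboard and hand positions with the audio transcription rather than from a fixed reach threshold as in the sketch above.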


DOI
Type:
Conference
City:
San Diego
Date:
2008-10-12
Department:
Data Science
Eurecom Ref:
2491
Copyright:
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PERMALINK : https://www.eurecom.fr/publication/2491