DISTBIC : A speaker-based segmentation for audio data indexing

Delacourt, Perrine; Wellekens, Christian J.
Speech Communication, Volume 32, N°1-2, 2000

In this paper, we address the problem of speaker-based segmentation, which is the first necessary step for several indexing tasks. It aims to extract homogeneous segments containing the longest possible utterances produced by a single speaker. In our context, no assumption is made about prior knowledge of the speaker or speech signal characteristics (neither a speaker model nor a speech model). However, we assume that people do not speak simultaneously and that we have no real-time constraints. We review existing techniques and propose a new segmentation method, which combines two different segmentation techniques. This method, called DISTBIC, is organized into two passes: first, the most likely speaker turns are detected, and then they are validated or discarded. The advantage of our algorithm is its efficiency in detecting speaker turns even close to one another (i.e., separated by a few seconds).
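As a concrete illustration of the second (validation) pass described above, the sketch below applies a Delta-BIC test to candidate change points assumed to come from a first, distance-based pass. This is a minimal sketch under stated assumptions, not the paper's implementation: the `features` array (e.g. MFCC vectors), the `candidates` indices and the `lambda_penalty` weight are illustrative names introduced here.

```python
# Minimal sketch of a BIC-based validation pass, assuming candidate speaker
# turns already come from a first, distance-based pass. The names `features`,
# `candidates` and `lambda_penalty` are illustrative, not from the paper.
import numpy as np

def _logdet_cov(segment):
    """Log-determinant of the full covariance matrix of a feature segment."""
    _, logdet = np.linalg.slogdet(np.cov(segment, rowvar=False))
    return logdet

def delta_bic(left, right, lambda_penalty=1.0):
    """Delta-BIC for modelling the two segments with one Gaussian vs. two.

    A positive value supports a speaker change between `left` and `right`.
    """
    merged = np.vstack([left, right])
    n, d = merged.shape
    model_cost = 0.5 * (n * _logdet_cov(merged)
                        - len(left) * _logdet_cov(left)
                        - len(right) * _logdet_cov(right))
    penalty = 0.5 * lambda_penalty * (d + 0.5 * d * (d + 1)) * np.log(n)
    return model_cost - penalty

def validate_turns(features, candidates, lambda_penalty=1.0):
    """Second pass: keep only candidates whose Delta-BIC is positive.

    Each candidate is tested against the segments bounded by its neighbours.
    """
    bounds = [0] + sorted(candidates) + [len(features)]
    kept = []
    for i in range(1, len(bounds) - 1):
        left = features[bounds[i - 1]:bounds[i]]
        right = features[bounds[i]:bounds[i + 1]]
        if delta_bic(left, right, lambda_penalty) > 0:
            kept.append(bounds[i])
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "speakers" (different means); candidates 150 and 450 are spurious.
    features = np.vstack([rng.normal(0.0, 1.0, (300, 12)),
                          rng.normal(3.0, 1.0, (300, 12))])
    print(validate_turns(features, candidates=[150, 300, 450]))  # expected: [300]
```

In this synthetic example the spurious candidates inside a single speaker's data yield a negative Delta-BIC and are discarded, while the true change at frame 300 is kept, mirroring the validate-or-discard step of the second pass.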


DOI:
10.1016/S0167-6393(00)00027-3
Type:
Journal
Date:
2000-09-01
Department:
Digital Security
Eurecom Ref:
564
Copyright:
© Elsevier. Personal use of this material is permitted. The definitive version of this paper was published in Speech Communication, Volume 32, N°1-2, 2000 and is available at: http://dx.doi.org/10.1016/S0167-6393(00)00027-3

PERMALINK: https://www.eurecom.fr/publication/564