Exploring auditory acoustic features for the diagnosis of the Covid-19

Kamble, Madhu R.; Patino, Jose; Zuluaga, Maria A.; Todisco, Massimiliano
ICASSP 2022, IEEE International Conference on Acoustics, Speech and Signal Processing, 22-27 May 2022, Singapore, Singapore

The current coronavirus outbreak has quickly escalated into a serious global problem that the World Health Organization has declared a Public Health Emergency of International Concern. Infectious diseases know no borders, so when it comes to controlling outbreaks, timing is essential: threats must be detected as early as possible, before they spread. Following a first successful DiCOVA challenge, the organisers released a second DiCOVA challenge with the aim of diagnosing COVID-19 from breath, cough and speech audio samples. This work presents the details of our automatic system for COVID-19 detection using breath, cough and speech recordings. We developed different front-end auditory acoustic features combined with a bidirectional Long Short-Term Memory (bi-LSTM) classifier. The results are promising and demonstrate the highly complementary behaviour of the auditory acoustic features across the Breathing, Cough and Speech tracks, giving an AUC of 86.60% on the test set.
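To illustrate the kind of pipeline described in the abstract, below is a minimal sketch of frame-level acoustic features fed to a bi-LSTM binary classifier. The feature front-end (MFCCs via librosa), network sizes and pooling strategy are illustrative assumptions, not the specific auditory acoustic features or settings used in the paper.

```python
# Sketch only: acoustic features -> bi-LSTM -> COVID-19 detection score.
# Feature choice and hyperparameters are assumptions for illustration.
import librosa
import torch
import torch.nn as nn


def extract_features(wav_path, sr=16000, n_mfcc=40):
    """Load a recording and return a (time, n_mfcc) feature tensor."""
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return torch.from_numpy(mfcc.T).float()                     # (frames, n_mfcc)


class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over the feature sequence, pooled into one score."""

    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):            # x: (batch, time, features)
        out, _ = self.lstm(x)        # (batch, time, 2 * hidden)
        pooled = out.mean(dim=1)     # average over time frames
        return torch.sigmoid(self.head(pooled)).squeeze(-1)


# Example usage (file path is a placeholder):
# feats = extract_features("cough_sample.wav").unsqueeze(0)  # (1, time, 40)
# model = BiLSTMClassifier()
# score = model(feats)  # probability-like detection score per recording
```

In the paper, several such front-ends are combined, which is where the complementary behaviour across the Breathing, Cough and Speech tracks comes from; the sketch shows only a single feature stream.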

Type: Conference
City: Singapore
Date: 2022-05-22
Department: Digital Security
Eurecom Ref: 6798
Copyright: © 2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PERMALINK: https://www.eurecom.fr/publication/6798