PANACEA cough sound-based diagnosis of COVID-19 for the DiCOVA 2021 Challenge

Kamble, Madhu R; Gonzalez-Lopez, Jose A; Grau, Teresa; Espin, Juan M; Cascioli, Lorenzo; Huang, Yiqing; Gomez-Alanis, Alejandro; Patino, José; Font, Roberto; Peinado, Antonio M; Gomez, Angel M; Evans, Nicholas; Zuluaga, Maria A; Todisco, Massimiliano
INTERSPEECH 2021, DiCOVA 2021 Challenge, Diagnosing COVID-19 using Acoustics, 30 August-3 September 2021, Brno, Czechia (Virtual Conference)

The COVID-19 pandemic has led to the saturation of public health services worldwide. In this scenario, the early diagnosis of SARS-CoV-2 infections can help to stop or slow the spread of the virus and to manage the demand on health services. This is especially important when resources are also being stretched by heightened demand linked to other seasonal diseases, such as the flu. In this context, the organisers of the DiCOVA 2021 challenge have collected a database with the aim of diagnosing COVID-19 through the use of cough audio samples. This work presents the details of the automatic system for COVID-19 detection from cough recordings presented by team PANACEA. This team consists of researchers from two European academic institutions and one company: EURECOM (France), the University of Granada (Spain), and Biometric Vox S.L. (Spain). We developed several systems based on established signal processing and machine learning methods. Our best system employs a frontend based on Teager energy operator cepstral coefficients (TECCs) and a light gradient boosting machine (LightGBM) backend. The AUC obtained by this system on the test set is 76.31%, which corresponds to a 10% improvement over the official baseline.
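To illustrate the kind of pipeline the abstract describes, the sketch below pairs a TECC-inspired frontend with a LightGBM backend. It is not the authors' implementation: the discrete Teager energy operator and the LightGBM/scikit-learn calls are standard, but the filterbank, frame parameters, utterance-level average pooling, classifier hyperparameters, and the synthetic stand-in data are all assumptions made for the example.

```python
# Hedged sketch of a TECC-style frontend + LightGBM backend (not the paper's code).
import numpy as np
from scipy.fftpack import dct
import lightgbm as lgb
from sklearn.metrics import roc_auc_score

def teager_energy(x):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def tecc_like_features(signal, n_fft=512, hop=160, n_bands=40, n_ceps=20):
    """TECC-like features (assumed details): Teager energy, framed power spectra,
    a crude triangular filterbank, log compression, and a DCT."""
    te = teager_energy(signal)
    frames = np.lib.stride_tricks.sliding_window_view(te, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # Placeholder linear-scale triangular filterbank (the paper's filterbank may differ).
    bank = np.zeros((n_bands, spec.shape[1]))
    edges = np.linspace(0, spec.shape[1] - 1, n_bands + 2).astype(int)
    for b in range(n_bands):
        lo, mid, hi = edges[b], edges[b + 1], edges[b + 2]
        bank[b, lo:mid] = np.linspace(0, 1, max(mid - lo, 1))
        bank[b, mid:hi] = np.linspace(1, 0, max(hi - mid, 1))
    log_e = np.log(spec @ bank.T + 1e-10)
    ceps = dct(log_e, type=2, axis=1, norm="ortho")[:, :n_ceps]
    return ceps.mean(axis=0)  # utterance-level average pooling (assumption)

# Synthetic stand-in for cough recordings and COVID-19 labels (not DiCOVA data).
rng = np.random.default_rng(0)
X = np.stack([tecc_like_features(rng.standard_normal(16000)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)  # assumed hyperparameters
clf.fit(X[:150], y[:150])
scores = clf.predict_proba(X[150:])[:, 1]
print("AUC on held-out split:", roc_auc_score(y[150:], scores))
```

With real data, the evaluation would follow the DiCOVA 2021 protocol (official train/validation/test splits and AUC scoring) rather than the random split used here.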


Type: Conference
City: Brno
Date: 2021-08-30
Department: Sécurité numérique
Eurecom Ref: 6582
Copyright: © ISCA. Personal use of this material is permitted. The definitive version of this paper was published in INTERSPEECH 2021, DiCOVA 2021 Challenge, Diagnosing COVID-19 using Acoustics, 30 August-3 September 2021, Brno, Czechia (Virtual Conference) and is available at: http://dx.doi.org/10.21437/Interspeech.2021-1062

PERMALINK: https://www.eurecom.fr/publication/6582