
Evaluation of Machine Learning Algorithms and Explainability Techniques to Detect Hearing Loss From a Speech-in-Noise Screening Test

Lenatti, Marta; Polo, Edoardo M.; Mollura, Maximiliano; Barbieri, Riccardo; Paglialonga, Alessia
2022-01-01

Abstract

Purpose: The aim of this study was to analyze the performance of multivariate machine learning (ML) models applied to a speech-in-noise hearing screening test and to investigate the contribution of the measured features toward hearing loss detection using explainability techniques.

Method: Seven different ML techniques, including transparent (i.e., decision tree and logistic regression) and opaque (e.g., random forest) models, were trained and evaluated on a data set including 215 tested ears (99 with hearing loss of mild degree or higher and 116 with no hearing loss). Post hoc explainability techniques were applied to highlight the role of each feature in predicting hearing loss.

Results: Random forest (accuracy = .85, sensitivity = .86, specificity = .85, precision = .84) performed, on average, better than decision tree (accuracy = .82, sensitivity = .84, specificity = .80, precision = .79). Support vector machine, logistic regression, and gradient boosting had performance similar to that of random forest. According to post hoc explainability analysis on models generated using random forest, the features with the highest relevance in predicting hearing loss were age, number and percentage of correct responses, and average reaction time, whereas the total test time had the lowest relevance.

Conclusions: This study demonstrates that a multivariate approach can help detect hearing loss with satisfactory performance. Further research on a larger sample and using more complex ML algorithms and explainability techniques is needed to fully investigate the role of input features (including additional features such as risk factors and individual responses to low-/high-frequency stimuli) in predicting hearing loss.
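The workflow summarized above — training a random forest on speech-in-noise test features and then ranking those features by post hoc importance — can be sketched as follows. This is a minimal illustration, not the study's actual pipeline or data: the synthetic data, the toy label rule, and the use of scikit-learn's impurity-based `feature_importances_` are all assumptions made for the example; only the feature names (age, number/percentage of correct responses, average reaction time, total test time) and the sample size come from the abstract.

```python
# Hypothetical sketch: train a random forest on speech-in-noise screening
# features and rank the features by impurity-based importance.
# All data below are synthetic stand-ins, not the study's data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 215  # same number of tested ears as in the study

# Synthetic stand-ins for the measured features
age = rng.uniform(20, 80, n)                 # years
n_correct = rng.integers(0, 25, n).astype(float)
pct_correct = n_correct / 25 * 100           # percentage of correct responses
avg_rt = rng.uniform(0.5, 3.0, n)            # average reaction time (s)
total_time = rng.uniform(120, 600, n)        # total test time (s)

X = np.column_stack([age, n_correct, pct_correct, avg_rt, total_time])
feature_names = ["age", "n_correct", "pct_correct",
                 "avg_reaction_time", "total_test_time"]

# Toy label rule (illustrative only): hearing loss more likely with
# higher age and a lower percentage of correct responses
y = ((age / 80) + (1 - pct_correct / 100)
     + rng.normal(0, 0.3, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))
# Rank features by importance, highest first
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```

Note that the study's post hoc analysis may rely on different explainability techniques (e.g., SHAP-style attributions); impurity-based importance is used here only because it is the simplest built-in ranking for a random forest.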
Files in this record:
  • Lenatti_ML-SNT_AJA_2022_published-version.pdf — Publisher's version, open access, Adobe PDF, 798.7 kB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1233530
Citations
  • PubMed Central: 3
  • Scopus: 11
  • Web of Science: 8