
Automatic Spontaneous Speech Analysis for the Detection of Cognitive Functional Decline in Older Adults: Multilanguage Cross-Sectional Study

Ambrosini, Emilia; Giangregorio, Chiara; Lomurno, Eugenio; Moccia, Sara; Matteucci, Matteo; Ferrante, Simona
2024-01-01

Abstract

Background: The rise in life expectancy is associated with an increase in long-term, gradual cognitive decline. Treatment is more effective in the early stages of the disease, so there is a need for low-cost, ecological solutions for the mass screening of community-dwelling older adults. Objective: This work aims to exploit the automatic analysis of free speech to identify signs of cognitive function decline. Methods: A sample of 266 participants older than 65 years was recruited in Italy and Spain and divided into 3 groups according to their Mini-Mental State Examination (MMSE) scores. Participants were asked to tell a story and describe a picture, and their voice recordings were used to automatically extract high-level features on different time scales. Based on these features, machine learning algorithms were trained to solve binary and multiclass classification problems using both mono- and cross-lingual approaches. The models were complemented with Shapley Additive Explanations (SHAP) for explainability. Results: In the Italian data set, healthy participants (MMSE score ≥ 27) were automatically discriminated from participants with mildly impaired cognitive function (20 ≤ MMSE score ≤ 26) and from those with moderate to severe impairment of cognitive function (11 ≤ MMSE score ≤ 19) with accuracies of 80% and 86%, respectively. Slightly lower performance was achieved on the Spanish and multilanguage data sets. Conclusions: This work proposes a transparent and unobtrusive assessment method that could be included in a mobile app for the large-scale monitoring of cognitive functionality in older adults. Voice is confirmed to be an important biomarker of cognitive decline owing to its noninvasive and easily accessible nature.
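The abstract describes a pipeline that extracts speech features from recordings, groups participants by MMSE score, trains classifiers, and applies SHAP for explainability. The following is a minimal illustrative sketch of that kind of pipeline, not the authors' implementation: it uses synthetic data, hypothetical feature names (pause_rate, speech_rate, pitch_variation, voiced_ratio), a scikit-learn gradient boosting classifier, and the shap library's TreeExplainer for the binary healthy vs mildly impaired split (MMSE ≥ 27 vs 20-26).

```python
"""Minimal sketch (not the authors' pipeline): classify speakers as healthy
(MMSE >= 27) vs mildly impaired (20 <= MMSE <= 26) from a table of speech
features, then inspect the trained model with SHAP. All feature names and
values below are hypothetical placeholders on synthetic data."""
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import shap

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in for per-participant speech features (hypothetical names).
features = pd.DataFrame({
    "pause_rate":      rng.normal(0.3, 0.1, n),   # pauses per second
    "speech_rate":     rng.normal(4.0, 0.8, n),   # syllables per second
    "pitch_variation": rng.normal(30.0, 8.0, n),  # F0 standard deviation (Hz)
    "voiced_ratio":    rng.uniform(0.4, 0.9, n),  # fraction of voiced frames
})
mmse = rng.integers(20, 31, n)  # synthetic MMSE scores in [20, 30]

# Group participants by MMSE as in the study: healthy (>= 27) vs mild impairment (20-26).
y = (mmse >= 27).astype(int)  # 1 = healthy, 0 = mildly impaired

X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.3, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP decomposes each prediction into additive per-feature contributions;
# averaging their absolute values gives a global feature importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)           # shape: (n_samples, n_features)
global_importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
for name, value in sorted(zip(features.columns, global_importance),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {value:.3f}")
```

On real data, the feature table would come from an acoustic feature extractor applied to the story-telling and picture-description recordings, and the same setup would extend to the multiclass and cross-lingual settings described in the abstract.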
Keywords: Mini-Mental State Examination; cognitive decline; machine learning; multilanguage; speech processing

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1276845