
Hybrid Convolutional Networks for End-to-End Event Detection in Concurrent PPG and PCG Signals Affected by Motion Artifacts

Marzorati, Davide; Mainardi, Luca; Cerveri, Pietro
2022-01-01

Abstract

Accurate detection of physiologically related events in photoplethysmographic (PPG) and phonocardiographic (PCG) signals recorded by wearable sensors is essential for estimating relevant cardiovascular parameters such as heart rate and blood pressure. However, measurements performed in uncontrolled conditions, without clinical supervision, leave the detection quality particularly susceptible to noise and motion artifacts. This work proposes a new, fully automatic computational framework, based on convolutional networks, to identify and localize fiducial points in time, namely the foot, maximum slope, and peak in the PPG signal and the S1 sound in the PCG signal, both acquired by a custom chest sensor recently described in the literature by our group. The event detection problem was reframed as a single hybrid regression-classification problem, entailing a custom neural architecture that processes the PPG and PCG signals sequentially. Tests were performed across four acquisition conditions (rest, cycling, rest recovery, and walking). Cross-validation results for the three PPG fiducial points showed identification accuracy greater than 93% and localization error (RMSE) less than 10 ms. As expected, the cycling and walking conditions yielded worse results than rest and recovery, while still reaching an accuracy greater than 90% and a localization error below 15 ms. Likewise, identification accuracy for the S1 sound was greater than 90% and its localization error less than 25 ms. Overall, this study showcased the ability of the proposed technique to detect events with high accuracy not only in steady acquisitions but also during subject movement. We also showed that the proposed network outperformed the traditional Shannon-energy-envelope method in S1 sound detection, reaching performance comparable to state-of-the-art algorithms.
Therefore, we argue that coupling chest sensors with deep learning processing techniques may enable wearable devices to unobtrusively acquire health information while being less affected by noise and motion artifacts.
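The hybrid regression-classification formulation mentioned in the abstract can be illustrated with a minimal, hypothetical sketch: convolutional features extracted from a signal window feed two heads, one predicting whether an event (e.g., a PPG foot or an S1 sound) is present in the window, the other regressing its time offset. All names, kernel values, and weights below are invented for illustration and are not the authors' architecture; a real implementation would use a trained deep network rather than hand-set parameters.

```python
# Illustrative sketch only (assumed names and weights, not the paper's model):
# a 1D convolution feeding a classification head (event present?) and a
# regression head (event time offset), mirroring the hybrid formulation.
import math

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation of signal with kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hybrid_head(features, w_cls, w_reg):
    """Pooled features -> (event probability, predicted offset in samples)."""
    pooled = sum(features) / len(features)          # global average pooling
    return sigmoid(w_cls * pooled), w_reg * pooled  # classification, regression

# Toy PPG-like window containing a single pulse-shaped bump
window = [0.0] * 8 + [0.2, 0.6, 1.0, 0.6, 0.2] + [0.0] * 8
features = relu(conv1d(window, [-1.0, 0.0, 1.0]))   # slope-detecting kernel
prob, offset = hybrid_head(features, w_cls=4.0, w_reg=10.0)
print(prob, offset)
```

In a trained network the convolutional kernels and head weights would be learned jointly, so the classification loss (event presence) and regression loss (localization error) shape the same shared features.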
Phonocardiography
Heart rate
Biomedical monitoring
Hidden Markov models
Motion artifacts
Computational modeling
Timing
Deep convolutional networks
heart sounds
photoplethysmography
phonocardiography
pulse arrival time
wearable sensors
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1233377
Citations
  • PubMed Central 1
  • Scopus 4
  • Web of Science 4