A Poincaré Image-Based Detector of ECG Segments Containing Atrial and Ventricular Beats
Garcia Isla G.; Mainardi L.; Corino V.
2021-01-01
Abstract
An electrocardiogram (ECG) classifier for the detection of ECG segments containing atrial or ventricular (A/V) beats could ease the detection of premature atrial complexes (PACs) and, in turn, the study of their relationship with atrial fibrillation (AF) and stroke. In this work, such a classifier is presented, based on convolutional neural networks (CNNs) and the representation of RR and dRR intervals as Poincaré Images. Two open-source PhysioNet databases containing beat annotations were used. ECG signals were divided into 30-beat segments with 50% overlap, and each segment was transformed into a Poincaré Image. A total of 381151 and 62142 Poincaré Images were computed for normal (N) and A/V segments, respectively. RR, dRR and both types of Poincaré Images combined were evaluated as inputs to the CNN. The CNN was trained with a patient-wise train-test division (i.e., no patient was included in both the training and test sets) in a 10-fold cross-validation. For the RR input, the patient-wise median (interquartile range) accuracy, sensitivity and positive predictive value were 97.90 (94.49 - 99.28), 96.03 (89.67 - 98.76) and 91.91 (70.87 - 99.24), respectively. No statistically significant differences in performance were found among the three types of Poincaré Image inputs. The results suggest that the present methodology distinguishes between N and A/V segments with high precision.
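The abstract describes segmenting the RR series into overlapping 30-beat windows and rendering each as a Poincaré Image for the CNN. The following is a minimal sketch of that idea, not the authors' code: the 32x32 bin count, the axis ranges, and the function names are illustrative assumptions, as the abstract does not specify them.

```python
# Sketch: 30-beat RR segmentation with 50% overlap and Poincaré Image construction.
# Bin counts, axis ranges and names are assumptions for illustration only.
import numpy as np

def segment_rr(rr, beats_per_segment=30, overlap=0.5):
    """Split an RR-interval series (in seconds) into overlapping segments."""
    step = int(beats_per_segment * (1 - overlap))  # 50% overlap -> step of 15 beats
    return [rr[i:i + beats_per_segment]
            for i in range(0, len(rr) - beats_per_segment + 1, step)]

def poincare_image(rr_segment, bins=32, rng=(0.2, 2.0)):
    """2D histogram of successive RR pairs (RR_n, RR_n+1) rendered as an image."""
    x, y = rr_segment[:-1], rr_segment[1:]
    img, _, _ = np.histogram2d(x, y, bins=bins, range=[rng, rng])
    return img / img.max() if img.max() > 0 else img  # normalise for CNN input

def drr_poincare_image(rr_segment, bins=32, rng=(-0.5, 0.5)):
    """Same construction applied to successive RR differences (dRR)."""
    drr = np.diff(rr_segment)
    img, _, _ = np.histogram2d(drr[:-1], drr[1:], bins=bins, range=[rng, rng])
    return img / img.max() if img.max() > 0 else img

# Example: a simulated RR series -> one Poincaré Image per 30-beat segment
rr = np.random.normal(0.8, 0.05, 300)
images = [poincare_image(seg) for seg in segment_rr(rr)]
```

In the paper, the RR images, the dRR images, or both combined are fed to the CNN; the sketch above only shows how one such image could be derived from a beat-annotated segment.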
File | Access | Description | Size | Format
---|---|---|---|---
Lupe_CinC_2021.pdf | Open access | Publisher's version | 233.49 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.