A 3D-CLDNN Based Multiple Data Fusion Framework for Finger Gesture Recognition in Human-Robot Interaction

Qi W.;Aliverti A.
2022-01-01

Abstract

Finger gesture recognition using surface electromyography (sEMG) has become an efficient Human-Robot Interaction (HRI) solution. Although Machine Learning (ML) techniques are widely applied in this field, the usual approaches to collecting and labeling large datasets require time-consuming implementation and impose heavy workloads. In this paper, a new deep learning structure, the three-dimensional convolutional long short-term memory neural network (3D-CLDNN), is proposed for finger gesture identification in human-machine interaction based on depth vision and sEMG signals. It automatically labels the depth data with a self-organizing map (SOM) and predicts the hand gesture using only sEMG signals. The 3D-CLDNN is designed to improve both the recognition rate and the computational speed. Compared with alternative approaches, the method achieved the highest clustering accuracy (98.60%) and the highest recognition accuracy (84.40%) with the lowest computational time. Finally, real-time human-machine interaction experiments demonstrate its efficiency.
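The pipeline the abstract describes pairs unsupervised SOM clustering (to pseudo-label the depth data) with a 3D-CLDNN classifier trained on sEMG. As a rough illustration of the SOM labeling step only, the sketch below trains a tiny self-organizing map from scratch and assigns each sample the index of its best-matching unit as a pseudo-label; the grid size, learning schedule, and toy 2-D feature vectors are illustrative assumptions, not details from the paper.

```python
import math
import random

def train_som(data, grid_w=2, grid_h=2, epochs=50, lr0=0.5, seed=0):
    """Train a tiny self-organizing map on low-dimensional feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Initialize each grid node with a random weight vector.
    nodes = [[rng.random() for _ in range(dim)] for _ in range(grid_w * grid_h)]
    sigma0 = max(grid_w, grid_h) / 2.0
    for epoch in range(epochs):
        # Exponentially decay learning rate and neighborhood radius.
        lr = lr0 * math.exp(-epoch / epochs)
        sigma = sigma0 * math.exp(-epoch / epochs)
        for x in data:
            # Best-matching unit (BMU): grid node closest to the sample.
            bmu = min(range(len(nodes)),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                        for d in range(dim)))
            bx, by = bmu % grid_w, bmu // grid_w
            for i, w in enumerate(nodes):
                ix, iy = i % grid_w, i // grid_w
                dist2 = (ix - bx) ** 2 + (iy - by) ** 2
                # Gaussian neighborhood: nearby nodes follow the BMU.
                h = math.exp(-dist2 / (2 * sigma * sigma))
                for d in range(dim):
                    w[d] += lr * h * (x[d] - w[d])
    return nodes

def som_label(nodes, x):
    """Pseudo-label a sample with the index of its best-matching unit."""
    dim = len(x)
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                 for d in range(dim)))
```

In this spirit, samples that fall in the same SOM cell share a pseudo-label, so no manual annotation of the depth frames is needed; in the paper those pseudo-labels then supervise the sEMG-based classifier.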
2022
2022 4th International Conference on Control and Robotics, ICCR 2022
978-1-6654-8641-5
Deep learning
Human-robot interaction
Multimodal data fusion
Signal processing
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1233802
Citations
  • Scopus: 4
  • ISI: 1