
A novel muscle-computer interface for hand gesture recognition using depth vision

Qi W.;Ovur S. E.;Su H.;Ferrigno G.;De Momi E.
2020-01-01

Abstract

The muscle-computer interface (muCI), one of the most widespread human-computer interfaces, has been widely adopted for the identification of hand gestures from the electrical activity of muscles. Although multi-modal theory and machine learning algorithms have made enormous progress in muCIs over the last decades, collecting and labeling large data sets creates a high workload and leads to time-consuming implementations. In this paper, a novel muCI was developed that integrates the advantages of EMG signals and depth vision and can automatically label clusters of EMG data using depth vision. A three-layer hierarchical k-medoids approach was designed to extract and label the clustering features of ten hand gestures, and a multi-class linear discriminant analysis algorithm was applied to build the hand gesture classifier. The results showed that the proposed algorithm achieved high accuracy and that the muCI performed well, automatically labeling the hand gestures in all experiments. The proposed muCI can be used for hand gesture recognition without labeling the data in advance and has potential for robot manipulation and virtual reality applications.
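The clustering-and-classification pipeline described in the abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it uses a plain single-layer k-medoids routine in place of the paper's three-layer hierarchical variant, synthetic per-channel EMG features, and a hypothetical vision_labels array standing in for the gesture labels recovered from the depth camera; a multi-class LDA classifier is then trained on the automatically labelled clusters.

```python
# Minimal sketch (assumptions: synthetic data, single-layer k-medoids,
# hypothetical vision-derived labels). Not the authors' implementation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def k_medoids(X, k, n_iter=100):
    """Plain k-medoids clustering with Euclidean distance."""
    medoid_idx = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        # Assign each sample to its nearest medoid.
        dists = np.linalg.norm(X[:, None, :] - X[medoid_idx][None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each medoid to the member minimising the within-cluster distance sum.
        new_idx = medoid_idx.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            intra = np.linalg.norm(
                X[members][:, None, :] - X[members][None, :, :], axis=2)
            new_idx[c] = members[intra.sum(axis=1).argmin()]
        if np.array_equal(new_idx, medoid_idx):
            break
        medoid_idx = new_idx
    return labels, medoid_idx

# Synthetic stand-in for windowed EMG features (e.g. RMS per channel) of 3 gestures.
n_per_gesture, n_channels, n_gestures = 60, 8, 3
X = np.vstack([rng.normal(loc=g, scale=0.3, size=(n_per_gesture, n_channels))
               for g in range(n_gestures)])
# Hypothetical gesture labels recovered from the depth camera, one per window.
vision_labels = np.repeat(np.arange(n_gestures), n_per_gesture)

cluster_ids, _ = k_medoids(X, k=n_gestures)
# Auto-label each cluster with the vision-derived gesture seen most often inside it.
cluster_to_gesture = {c: np.bincount(vision_labels[cluster_ids == c]).argmax()
                      for c in np.unique(cluster_ids)}
y_auto = np.array([cluster_to_gesture[c] for c in cluster_ids])

# Multi-class LDA classifier trained on the automatically labelled EMG features.
clf = LinearDiscriminantAnalysis().fit(X, y_auto)
print("agreement with vision labels:", clf.score(X, vision_labels))
```

In this sketch the depth-vision labels are only used to name the clusters, so the EMG classifier is trained without any manual annotation of the EMG data, which mirrors the automatic-labeling idea of the paper at a very reduced scale.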
2020
Classification
Clustering
Depth vision
Hand gesture recognition
Muscle computer interface
File: Zhou2020_Article_ANovelMuscle-computerInterface (1).pdf (Publisher's version, Adobe PDF, 5.23 MB, restricted access)

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1146600
Citations
  • PMC: not available
  • Scopus: 23
  • ISI (Web of Science): 18