Intelligent disassembly scenario understanding for human behavior and intention recognition towards self-perception human-robot collaboration system

Xiao, J.; Terzi, S.; Macchi, M.
2025-01-01

Abstract

The recycling of end-of-life (EOL) products poses significant challenges due to inefficient and unsafe disassembly processes. To address this, we propose a novel self-perception human-robot collaboration (HRC) system that enhances disassembly efficiency and safety through multi-modal human intention recognition. Our core methodological innovation lies in the real-time fusion of three key perception modules: action recognition using a Spatial-Temporal Graph Convolutional Network (ST-GCN), disassembly tool detection based on an enhanced YOLO algorithm, and facial angle recognition for operator awareness inference. A dedicated dataset for retired power battery disassembly was constructed to support this research, encompassing human skeletal data for action recognition, labeled images for tool detection, and facial data for expression detection. The proposed system was validated on a physical HRC disassembly platform. Experimental results demonstrate a marked improvement, with our integrated intention recognition method achieving an accuracy of approximately 85%, significantly outperforming traditional single-modality approaches. Furthermore, the HRC disassembly operation was completed in 238 s, which is 60 s (or 20%) faster than purely manual disassembly. This work provides a robust and efficient HRC disassembly framework for intelligent disassembly scenario understanding, contributing to advancing circular manufacturing.
Files in this item:
File: 1-s2.0-S0278612525002730-main.pdf (Publisher's version, open access)
Format: Adobe PDF
Size: 22.02 MB

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1300511
Citations
  • PubMed Central: not available
  • Scopus: 5
  • Web of Science: 5