
Design of an Assistive Controller for Physical Human-Robot Interaction Based on Cooperative Game Theory and Human Intention Estimation

Rocco, P
2024-01-01

Abstract

This article aims to design an assistive controller for physical Human-Robot Interaction (pHRI) based on Dynamic Cooperative Game Theory (DCGT). In particular, a distributed Model Predictive Control (dMPC) is formulated based on the DCGT principles (GT-dMPC). For proper implementation, one crucial piece of information is the human intention, defined as the desired trajectory that the human wants to follow over a finite rolling prediction horizon. To predict this desired trajectory, a learning model is composed of cascaded Long Short-Term Memory (LSTM) and Fully Connected (FC) layers (RNN + FC). Iterative training and Transfer Learning (TL) techniques are proposed to adapt the model to different users. The behavior of the proposed GT-dMPC framework is thoroughly analyzed in simulation to understand its applicability and the tuning of its parameters as a pHRI assistive controller. Moreover, real-world experiments were carried out on a UR5 robotic arm equipped with a force sensor. First, a brief validation of the RNN + FC model integrated with the GT-dMPC is presented for the iterative training procedure and the TL. Finally, an application scenario is proposed in which two objects are co-manipulated, and the results are compared with other controllers typically used in pHRI. Results show that the proposed controller reduces the force the human must exert to complete the tasks, even in the presence of unknown and different loads and inertias. Moreover, the proposed controller allows the target point to be reached precisely and does not introduce undesirable oscillations. Finally, a subjective questionnaire shows that the proposed controller is, in general, preferred by different users.
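
The abstract names the ingredients of the approach without reporting equations or architecture details; the two sketches below are generic illustrations of those ingredients, not the authors' formulation. A cooperative game-theoretic MPC of this kind is commonly posed with each agent i (human h, robot r) minimizing a quadratic tracking cost over the rolling horizon of length N, and the cooperative solution minimizing a convex combination of the two costs; the weights Q_i, R_i and the sharing parameter \alpha are placeholders, not the paper's notation:

    J_i = \sum_{k=0}^{N-1} \left( \lVert x_k - x^{d}_{i,k} \rVert^2_{Q_i} + \lVert u_{i,k} \rVert^2_{R_i} \right), \quad i \in \{h, r\}
    J_{coop} = \alpha J_h + (1 - \alpha) J_r

For the human-intention predictor, a minimal PyTorch sketch of a cascaded LSTM + fully connected model is given below, assuming the network maps a sliding window of interaction data to the desired trajectory over the prediction horizon. Input features, window length, horizon, and layer sizes are illustrative assumptions, not values from the paper.

    # Minimal sketch (not the authors' implementation) of a cascaded LSTM + FC
    # predictor of the human's desired trajectory over an MPC horizon.
    import torch
    import torch.nn as nn

    class HumanIntentPredictor(nn.Module):
        def __init__(self, n_features=9, hidden_size=64, horizon=10, n_dof=3):
            super().__init__()
            self.horizon, self.n_dof = horizon, n_dof
            # LSTM encodes a window of past interaction data
            # (e.g., end-effector pose, velocity, measured human force).
            self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                                num_layers=2, batch_first=True)
            # FC head maps the last hidden state to the predicted desired
            # Cartesian trajectory over the rolling prediction horizon.
            self.fc = nn.Sequential(
                nn.Linear(hidden_size, 128),
                nn.ReLU(),
                nn.Linear(128, horizon * n_dof),
            )

        def forward(self, x):
            # x: (batch, window_length, n_features)
            out, _ = self.lstm(x)
            traj = self.fc(out[:, -1, :])            # use the last hidden state
            return traj.view(-1, self.horizon, self.n_dof)

    # Example: predict the next 10 desired positions from a 50-sample window.
    model = HumanIntentPredictor()
    window = torch.randn(1, 50, 9)                   # synthetic interaction history
    predicted_trajectory = model(window)             # shape (1, 10, 3)

    # One common way to adapt such a model to a new user via transfer learning
    # (an assumption here, not necessarily the authors' procedure) is to freeze
    # the recurrent encoder and fine-tune only the FC head on the new user's data.
    for p in model.lstm.parameters():
        p.requires_grad = False
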
Keywords: Physical human-robot interaction; learning human intention; human intention identification; dynamic cooperative game theory; model predictive control
File attached to this record: TASE_Franceschi_et_al_2024.pdf (publisher's version, Adobe PDF, 2.98 MB); access restricted.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1280206