
Experimental Validation of an Actor-Critic Model Predictive Force Controller for Robot-Environment Interaction Tasks

Braghin F.;
2023-01-01

Abstract

In industrial settings, robots are typically employed to accurately track a reference force exerted on the surrounding environment in order to complete interaction tasks. Interaction controllers are commonly used to achieve this goal, but they either require manual tuning, which demands a significant amount of time, or exact modeling of the environment the robot will interact with, and may therefore fail during the actual application. A significant advancement in this area would be a high-performance force controller that does not need operator calibration and can be quickly deployed in any scenario. With this aim, this paper proposes an Actor-Critic Model Predictive Force Controller (ACMPFC), which outputs the optimal setpoint to follow in order to guarantee force tracking, computed by continuously trained neural networks. This strategy extends a reinforcement learning-based approach originally developed in the context of human-robot collaboration, suitably adapted to robot-environment interaction. We validate the ACMPFC in a real-world scenario featuring a Franka Emika Panda robot. Compared with a base force controller and a learning-based approach, the proposed controller reduces the force tracking MSE while attaining fast convergence: with respect to the base force controller, ACMPFC reduces the MSE by a factor of 4.35.
Proceedings of the International Conference on Informatics in Control, Automation and Robotics
Artificial Neural Networks
Impedance Control
Optimized Interaction Control
Physical Robot-Environment Interaction

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1263194
Citations
  • Scopus: 0