A Robot-Agnostic Framework to Learn Position-Force Controlled Robotic Applications

Lucci N.; Montini E.; Zappa I.; Zanchettin A. M.; Rocco P.
2026-01-01

Abstract

Human-to-robot knowledge transfer plays a pivotal role in letting robots adapt to a wide range of Tasks. This transfer should provide the robot with a set of building blocks representing both the semantic knowledge of the Task and the sequence of position/force primitives needed to achieve it. Furthermore, to grant the robot autonomous decision making and the ability to handle unstructured workspaces, it should schedule the best course of action, even in case of unexpected events. This work proposes a framework that exploits Programming by Demonstration (PbD) to transfer Task knowledge from human to robot, resulting in semantic Behaviour Trees (BTs). These encapsulate the Skills’ semantics through Predicates and their execution through position and force primitives. Two experiments and an industrial-like use case show how the framework, together with a knowledge base and a PDDL planner, provides industrial robots of different brands with autonomy in decision making, flexibility across Tasks and reactiveness to unexpected events.

Note to Practitioners

In modern industrial automation, robots are expected to handle diverse Tasks with minimal reprogramming. Traditional automation relies on predefined sequences, limiting adaptability when Tasks change or unexpected events occur. This is especially problematic in industries such as manufacturing, logistics, and assembly, where variability is common. This work enables robots to learn from human demonstrations and autonomously adapt to new situations: it captures the Task’s objective and all its execution details, enhancing flexibility and responsiveness during production. The framework exploits Task demonstrations instead of manual coding, a mathematical model to guide the robot throughout the demonstrated Task, and AI to plan each step while adapting to unexpected changes. This methodology reduces programming time while remaining independent of the robot’s brand. Key contributions include bridging human knowledge with robotic execution through Task demonstrations, and organising them into a step-by-step, condition-based guide able to handle possible unexpected events. AI constructs such a guide and handles position- and force-based Tasks. Limitations remain, such as the need for full information about the robot’s surroundings (high-quality sensor data) and for a fixed set of rules known beforehand. Future research could extend the approach to multi-robot systems and remove, or simplify, the creation of the required fixed set of rules. Beyond industrial automation, this method has potential applications in humanoid and medical robotics, where adaptability and learning from human demonstrations are crucial. This framework enhances automation across industries by making robots more intuitive and adaptive.
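As a rough illustration of the semantic-BT idea the abstract describes (not the authors’ implementation), the Python sketch below shows how a BT leaf might pair precondition/postcondition Predicates with a position/force primitive so that a failed check can be surfaced to a planner. All class names, predicate strings, and the stub controller are hypothetical assumptions.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    # Hypothetical Predicate: a named condition checked against a symbolic
    # world state, e.g. {"holding(part_a)": True}.
    @dataclass(frozen=True)
    class Predicate:
        name: str

        def holds(self, state: Dict[str, bool]) -> bool:
            return state.get(self.name, False)

    # Hypothetical semantic BT leaf: it runs a position/force primitive only
    # when its precondition Predicates hold, then asserts its postconditions,
    # so a planner can re-schedule whenever a tick fails.
    @dataclass
    class SkillNode:
        name: str
        preconditions: List[Predicate]
        postconditions: List[Predicate]
        primitive: Callable[[], bool]  # stand-in for e.g. a force-controlled insertion

        def tick(self, state: Dict[str, bool]) -> str:
            if not all(p.holds(state) for p in self.preconditions):
                return "FAILURE"  # unexpected event: surface it to the planner
            if self.primitive():  # execute the motion/force primitive
                for p in self.postconditions:
                    state[p.name] = True
                return "SUCCESS"
            return "FAILURE"

    # Usage: an insertion Skill that requires the part to be grasped first.
    state = {"holding(part_a)": True}
    insert = SkillNode(
        name="insert(part_a, slot_b)",
        preconditions=[Predicate("holding(part_a)")],
        postconditions=[Predicate("inserted(part_a, slot_b)")],
        primitive=lambda: True,  # stub controller for the sketch
    )
    print(insert.tick(state))  # -> SUCCESS; state records inserted(part_a, slot_b)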
Keywords
Behaviour-based systems
collaborative robots in manufacturing
failure detection and recovery
Files in this item:
TASE_Fratini_et_al_2026.pdf (Publisher’s version, Adobe PDF, 4.31 MB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1307686
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0