
Symbolic representation of objects relative poses for robotic manipulation tasks

Zappa I.; Zanchettin A. M.; Rocco P.
2026-01-01

Abstract

Collaborative robots (cobots) are democratizing industrial automation with their user-friendly programming approaches. Nevertheless, the Blockly-like interfaces typically available on cobots still require the user to define the program logic flow. Recent advancements in robotics research provide the robotic system with the reasoning capabilities given by symbolic artificial intelligence. This way, the cobot can acquire a new skill from a user demonstration, understand its semantics, and use symbolic planning for grounding and sequencing. Such methodologies rely on a symbolic description of the scene that should adequately represent how the cobot's actions modify the environment. The symbols employed in the literature, however, either lack descriptive accuracy or are too specific for the targeted task, so the proposed teaching methodologies apply only to simple scenarios. This paper addresses these issues by introducing a methodology for symbolically describing general-purpose spatial relations between entities in a workspace, enhancing the flexibility and the range of application of cobots' symbolic reasoning for complex manipulation tasks. The proposed approach involves defining a tunable set of predicates for relative positions and orientations, enabling the precise symbolic representations necessary for real-world tasks. The adoption of these symbols into a Programming by Demonstration framework empowers non-expert users to teach skills and deploy cobots in complex industrial tasks without coding. Experimental results demonstrate the effectiveness of this method, showing that first-time users can deploy cobots for a complex machine tending task comprising part reorientations.
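The abstract's central idea of a "tunable set of predicates for relative positions and orientations" can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: the predicate names (`above`, `aligned`), the threshold parameters, and the scene dictionary are all assumptions chosen for illustration.

```python
import math

def above(pos_a, pos_b, min_gap=0.01, xy_tol=0.05):
    """True if A sits above B: a vertical gap plus a tunable lateral tolerance."""
    dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
    return (pos_a[2] - pos_b[2] > min_gap) and math.hypot(dx, dy) < xy_tol

def aligned(yaw_a, yaw_b, tol=math.radians(10)):
    """True if two yaw angles agree up to a tunable angular tolerance."""
    diff = (yaw_a - yaw_b + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < tol

def scene_predicates(objects):
    """Ground all pairwise predicates into a symbolic scene description."""
    facts = set()
    for name_a, (pos_a, yaw_a) in objects.items():
        for name_b, (pos_b, yaw_b) in objects.items():
            if name_a == name_b:
                continue
            if above(pos_a, pos_b):
                facts.add(("above", name_a, name_b))
            if aligned(yaw_a, yaw_b):
                facts.add(("aligned", name_a, name_b))
    return facts

# Toy scene: each object maps to ((x, y, z) position in metres, yaw in radians).
objects = {
    "part":  ((0.40, 0.10, 0.15), 0.00),
    "table": ((0.40, 0.10, 0.00), 0.05),
}
facts = scene_predicates(objects)
```

Tightening or loosening the tolerances changes which symbolic facts hold, which is what makes such a predicate set "tunable": the same geometric scene can be described at different levels of precision depending on the task.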
2026
Collaborative robotics
Programming by demonstration
Robot skill programming
Symbolic artificial intelligence
Symbolic knowledge representation
Files in this record:
File: EAAI_Zappa_et_al_2026.pdf
Access: open access
Description: Publisher's version
Size: 5.24 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1307689
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0