Digital twin-based reinforcement learning framework: application to autonomous mobile robot dispatching

Negri E.
2024-01-01

Abstract

This paper proposes a new framework for embedding an Intelligent Digital Twin (DT) in a production system with the objective of achieving more efficient real-time production planning and control. For that purpose, the Intelligence Layer is based on Reinforcement Learning (RL) and Deep RL (DRL) algorithms. Using this form of control instead of a parametric simulation-based optimization approach makes it possible to benefit from the separation between the training and execution phases. To ensure consistency and reusability, this work presents a standardized framework, based on a formal methodology, that specifies how the various components of the DT-based RL architecture interact over time to address essential real-time concurrency and synchronization aspects. Experiments are conducted in a small-scale production system in an Industry 4.0 laboratory, where material handling operations are performed by an Autonomous Mobile Robot (AMR). Results show how synchronized state updates between the Physical and Cyber Worlds are used within the Decision Layer to ensure real-time responses to AMR dispatching requests. Finally, to deal with continuous and high-dimensional state spaces, a Deep Q-Network is implemented. The findings of an extensive computational study reveal that the DT-based DRL solution improves efficiency and robustness compared with conventional dispatching rules.
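The abstract names a Deep Q-Network (DQN) as the learning component of the Intelligence Layer but, being an abstract, does not expose the paper's state, action, or reward design. The sketch below is therefore a minimal, hypothetical illustration of such a DQN dispatching agent in PyTorch: the station count, state encoding, action set, and reward interface are placeholder assumptions for the example, not the paper's actual formulation.

# Hedged sketch of a DQN agent for AMR dispatching. N_STATIONS, STATE_DIM,
# and N_ACTIONS are illustrative assumptions, not taken from the paper.
import random
from collections import deque

import torch
import torch.nn as nn

N_STATIONS = 4                  # hypothetical number of pickup stations
STATE_DIM = 2 * N_STATIONS + 2  # e.g. queue lengths, waiting flags, AMR pose
N_ACTIONS = N_STATIONS          # action = which station the AMR serves next


def make_qnet() -> nn.Module:
    """Small MLP mapping a continuous state vector to one Q-value per action."""
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )


class DQNDispatcher:
    def __init__(self, gamma: float = 0.99, lr: float = 1e-3):
        self.qnet = make_qnet()
        self.target = make_qnet()
        self.target.load_state_dict(self.qnet.state_dict())
        self.optim = torch.optim.Adam(self.qnet.parameters(), lr=lr)
        self.buffer: deque = deque(maxlen=50_000)  # experience replay memory
        self.gamma = gamma

    def act(self, state, epsilon: float) -> int:
        """Epsilon-greedy dispatching decision for one AMR request."""
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q = self.qnet(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

    def store(self, s, a, r, s_next, done):
        """Record one (state, action, reward, next state, done) transition."""
        self.buffer.append((s, a, r, s_next, done))

    def train_step(self, batch_size: int = 64):
        """One gradient step on the temporal-difference error."""
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
        )
        q = self.qnet(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.target(s2).max(1).values
        loss = nn.functional.smooth_l1_loss(q, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

    def sync_target(self):
        """Periodically copy online weights into the target network."""
        self.target.load_state_dict(self.qnet.state_dict())

In a DT-based architecture like the one described, act() would be invoked by the Decision Layer each time the synchronized twin state signals an AMR dispatching request, while train_step() and sync_target() can run in a separate training phase, consistent with the training/execution separation the abstract emphasizes.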
Keywords: autonomous mobile robot; deep Q-Network; digital twin; dispatching; real-time; reinforcement learning
File: Digital twin-based reinforcement learning framework application to autonomous mobile robot dispatching.pdf (Publisher's version, Adobe PDF, 6.66 MB, restricted access)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1262488