Graph-Based Design of Hierarchical Reinforcement Learning Agents

Tateo D.; Erdenlig I. S.; Bonarini A.
2020-01-01

Abstract

There is increasing interest in applying Reinforcement Learning to new and more challenging problems, such as those emerging in robotics and unmanned autonomous vehicles. To tackle these complex systems, a hierarchical and multi-scale representation is crucial. This has drawn attention to Hierarchical Deep Reinforcement Learning systems. Despite their successful application, Deep Reinforcement Learning systems suffer from a variety of drawbacks: they are data hungry, they lack interpretability, and it is difficult to derive theoretical properties about their behavior. Classical Hierarchical Reinforcement Learning approaches, while not suffering from these drawbacks, are often suited only for finite action and state spaces. Furthermore, in most works, there is no systematic way to represent domain knowledge, which is often embedded only in the reward function. We present a novel Hierarchical Reinforcement Learning framework based on the hierarchical design approach typical of control theory. We developed our framework by extending the block diagram representation of control systems to fit the needs of a Hierarchical Reinforcement Learning scenario, thus making it possible to integrate domain knowledge into an effective hierarchical architecture.
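The block-diagram view mentioned in the abstract can be made concrete with a small computational-graph sketch. The Python code below is a minimal, hypothetical illustration (class and method names are invented for this sketch, not taken from the paper): each block maps its inputs to an output, blocks are wired into a graph, a fixed function block encodes domain knowledge, and a high-level block produces a sub-goal that a low-level block turns into a primitive action.

```python
# Hypothetical sketch of a block-diagram / graph-based HRL agent.
# Names and structure are illustrative only, not the paper's actual API.
import numpy as np


class Block:
    """A node of the control graph: maps its inputs to an output."""
    def __init__(self, name):
        self.name = name
        self.last_output = None

    def step(self, *inputs):
        raise NotImplementedError


class FunctionBlock(Block):
    """Encodes fixed domain knowledge as a deterministic mapping."""
    def __init__(self, name, fn):
        super().__init__(name)
        self.fn = fn

    def step(self, *inputs):
        self.last_output = self.fn(*inputs)
        return self.last_output


class PolicyBlock(Block):
    """A learnable block; here a random placeholder policy."""
    def __init__(self, name, action_dim, scale=1.0):
        super().__init__(name)
        self.action_dim = action_dim
        self.scale = scale

    def step(self, *inputs):
        # A real implementation would condition on the inputs and learn.
        self.last_output = self.scale * np.random.uniform(-1, 1, self.action_dim)
        return self.last_output


class ControlGraph:
    """Wires blocks together and evaluates them in a fixed order."""
    def __init__(self):
        self.blocks = []        # evaluation order (assumed topological)
        self.parents = {}       # block name -> list of parent block names

    def add(self, block, parents=()):
        self.blocks.append(block)
        self.parents[block.name] = list(parents)
        return block

    def step(self, observation):
        outputs = {"env": observation}
        for block in self.blocks:
            inputs = [outputs[p] for p in self.parents[block.name]]
            outputs[block.name] = block.step(*inputs)
        return outputs[self.blocks[-1].name]  # last block emits the action


if __name__ == "__main__":
    graph = ControlGraph()
    # High level: picks a 2D target position (sub-goal) from the observation.
    graph.add(PolicyBlock("high_level", action_dim=2), parents=["env"])
    # Domain knowledge: error between the sub-goal and the current position.
    graph.add(FunctionBlock("error", lambda goal, obs: goal - obs[:2]),
              parents=["high_level", "env"])
    # Low level: turns the error signal into a primitive action.
    graph.add(PolicyBlock("low_level", action_dim=2, scale=0.1),
              parents=["error"])
    print(graph.step(np.zeros(4)))
```

In this sketch the learned components are isolated in the policy blocks, while the domain knowledge (here, a simple error computation) is expressed as an ordinary function block in the graph, which is the kind of separation the abstract argues for.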
2020
IEEE International Conference on Intelligent Robots and Systems
978-1-7281-4004-9
Reinforcement Learning
Deep Learning
Hierarchical architecture
Files in this record:
Paper.pdf (open access)
Description: Main paper
Type: Post-Print (DRAFT or Author's Accepted Manuscript, AAM)
Size: 230.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1156734
Citations
  • Scopus: 2
  • Web of Science (ISI): 2