Inverse Reinforcement Learning from a Gradient-based Learner

Giorgia Ramponi, Gianluca Drappo, Marcello Restelli
2020-01-01

Abstract

Inverse Reinforcement Learning addresses the problem of inferring an expert’s reward function from demonstrations. However, in many applications we not only have access to the expert’s near-optimal behaviour, but we also observe part of her learning process. In this paper, we propose a new algorithm for this setting, in which the goal is to recover the reward function being optimized by an agent, given a sequence of policies produced during learning. Our approach is based on the assumption that the observed agent is updating her policy parameters along the gradient direction. We then extend our method to the more realistic scenario in which we only have access to a dataset of learning trajectories. For both settings, we provide theoretical insights into our algorithms’ performance. Finally, we evaluate the approach on a simulated GridWorld environment and on MuJoCo environments, comparing it with a state-of-the-art baseline.
Published in: 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
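The abstract’s key assumption, that the observed learner ascends the gradient of its expected return, makes the reward recoverable by simple regression when the reward is linear in known features: each observed update theta[t+1] - theta[t] ≈ alpha[t] * grad_J(theta[t]; w) is then linear in the unknown weights w. Below is a minimal, hypothetical Python sketch of that idea. It is not the authors’ implementation; the function and variable names (recover_reward_weights, jacobians, alphas) are illustrative, and the learning rates are taken as known here for brevity.

    # Minimal sketch (not the authors' code): recover linear reward weights w
    # from an observed sequence of policy parameters, assuming the learner
    # performed the gradient-ascent updates
    #     theta[t+1] = theta[t] + alpha[t] * grad_J(theta[t]; w)
    # and that the policy gradient is linear in w: grad_J(theta[t]; w) = J_t @ w.
    import numpy as np

    def recover_reward_weights(thetas, jacobians, alphas):
        """Least-squares estimate of the reward weights.

        thetas:    T+1 observed parameter vectors, each of shape (d,)
        jacobians: T matrices of shape (d, q); jacobians[t] maps reward
                   weights to the policy gradient at thetas[t] (assumed given)
        alphas:    T learning rates, assumed known here for brevity
        """
        T = len(jacobians)
        # Stack the T linear systems thetas[t+1] - thetas[t] = alphas[t] * J_t @ w
        # into one overdetermined system A @ w = b and solve it in one shot.
        A = np.vstack([alphas[t] * jacobians[t] for t in range(T)])
        b = np.concatenate([thetas[t + 1] - thetas[t] for t in range(T)])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w / np.linalg.norm(w)  # the reward scale is not identifiable

In the trajectory-only setting the abstract mentions, the parameter vectors and the Jacobians are not observed directly and would themselves have to be estimated from the learning trajectories before running this regression.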
Files in this record:

  • NeurIPS-2020-inverse-reinforcement-learning-from-a-gradient-based-learner-Paper.pdf
    Description: Main article (publisher’s version)
    Access: open access
    Size: 374.5 kB
    Format: Adobe PDF

  • NeurIPS-2020-inverse-reinforcement-learning-from-a-gradient-based-learner-Supplemental.pdf
    Description: Supplementary material (publisher’s version)
    Access: open access
    Size: 414.42 kB
    Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1167115
Citations
  • Scopus: 6