Sample complexity of variance-reduced policy gradient: weaker assumptions and lower bounds
Papini, Matteo; Metelli, Alberto Maria; Restelli, Marcello
2024-01-01
Abstract
Several variance-reduced versions of REINFORCE based on importance sampling achieve an improved O(ϵ⁻³) sample complexity to find an ϵ-stationary point, under an unrealistic assumption on the variance of the importance weights. In this paper, we propose the Defensive Policy Gradient (DEF-PG) algorithm, based on defensive importance sampling, achieving the same result without any assumption on the variance of the importance weights. We also show that this rate is not improvable, by establishing a matching Ω(ϵ⁻³) lower bound, and that REINFORCE, with its O(ϵ⁻⁴) sample complexity, is actually optimal under weaker assumptions on the policy class. Numerical simulations show promising results for the proposed technique compared to similar algorithms based on vanilla importance sampling.
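
For readers unfamiliar with the term, the sketch below illustrates the defensive importance sampling idea the abstract alludes to, on a toy 1-D estimation problem. The point is that weighting against a mixture of the target and behavior distributions bounds the importance weights by construction, so no assumption on their variance is needed. The Gaussian distributions, the mixture coefficient alpha, and all names here are illustrative assumptions for exposition, not the paper's actual DEF-PG algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10_000
alpha = 0.2  # defensive mixture coefficient (illustrative choice)

# Toy 1-D Gaussians standing in for the target distribution p
# (e.g., induced by the current policy) and the behavior
# distribution q (e.g., induced by an older policy).
p = norm(loc=1.0, scale=1.0)
q = norm(loc=0.0, scale=1.0)
f = lambda x: x**2  # quantity whose expectation under p we estimate

# Draw from the defensive mixture beta = alpha*p + (1 - alpha)*q:
# with probability alpha sample from p, otherwise from q.
from_p = rng.random(n) < alpha
x = np.where(from_p,
             p.rvs(n, random_state=rng),
             q.rvs(n, random_state=rng))

# Defensive weights w = p(x) / beta(x) satisfy w <= 1/alpha by
# construction, so their variance is finite with no assumption
# on how far p is from q. Vanilla weights p(x)/q(x) have no such bound.
w = p.pdf(x) / (alpha * p.pdf(x) + (1 - alpha) * q.pdf(x))

print("defensive IS estimate of E_p[f]:", np.mean(w * f(x)))  # ~2.0
print("max weight:", w.max(), "<= 1/alpha =", 1 / alpha)
```

With a vanilla weight p(x)/q(x), a single sample from the tail where q is much smaller than p can dominate the estimate; the defensive weight caps this at 1/alpha, which is the mechanism that lets a result like the abstract's O(ϵ⁻³) bound go through without a variance assumption.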


