Multi-timestep models for model-based reinforcement learning

Benechehab, Abdelhakim; Paolo, Giuseppe; Thomas, Albert; Filippone, Maurizio; Kégl, Balázs
Submitted to arXiv, 11 October 2023

In model-based reinforcement learning (MBRL), most algorithms rely on simulating trajectories from one-step dynamics models learned on data. A critical challenge of this approach is the compounding of one-step prediction errors as the length of the trajectory grows. In this paper we tackle this issue by using a multi-timestep objective to train one-step models. Our objective is a weighted sum of a loss function (e.g., negative log-likelihood) at various future horizons. We explore and test a range of weight profiles. We find that exponentially decaying weights lead to models that significantly improve the long-horizon R2 score. This improvement is particularly noticeable when the models are evaluated on noisy data. Finally, using a soft actor-critic (SAC) agent in pure batch reinforcement learning (RL) and iterated batch RL scenarios, we find that our multi-timestep models outperform or match standard one-step models. This is especially evident in a noisy variant of the considered environment, highlighting the potential of our approach in real-world applications.
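To make the objective concrete, below is a minimal sketch (not the authors' code) of a multi-timestep training loss: a one-step model is rolled forward for H steps and a per-horizon loss is accumulated with exponentially decaying weights. The horizon H, decay rate alpha, and the use of MSE as a stand-in for the paper's negative log-likelihood are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def multi_timestep_loss(model, states, actions, H=4, alpha=0.5):
        """Weighted multi-horizon loss for a one-step dynamics model.

        states:  (B, T, d_s) observed state trajectory, T >= H + 1
        actions: (B, T, d_a) actions taken along the trajectory
        model:   callable (state, action) -> predicted next state
        """
        # Exponentially decaying weight profile, normalised to sum to 1.
        weights = torch.tensor([alpha ** h for h in range(1, H + 1)])
        weights = weights / weights.sum()

        pred = states[:, 0]  # start the rollout from the first observed state
        loss = 0.0
        for h in range(1, H + 1):
            # Roll the one-step model forward on its own predictions.
            pred = model(pred, actions[:, h - 1])
            # Compare the h-step prediction against the observed state.
            loss = loss + weights[h - 1] * F.mse_loss(pred, states[:, h])
        return loss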


Type:
Conference
Date:
2023-10-11
Department:
Data Science
Eurecom Ref:
8087
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to arXiv on 11 October 2023 and is available at:

PERMALINK: https://www.eurecom.fr/publication/8087