A multi-step loss function for robust learning of the dynamics in model-based reinforcement learning

Benechehab, Abdelhakim; Thomas, Albert; Paolo, Giuseppe; Filippone, Maurizio; Kégl, Balázs
Submitted to arXiv, 5 February 2024

In model-based reinforcement learning, most algorithms rely on simulating trajectories from one-step models of the dynamics learned on data. A critical challenge of this approach is the compounding of one-step prediction errors as the length of the trajectory grows. In this paper we tackle this issue by using a multi-step objective to train one-step models. Our objective is a weighted sum of the mean squared error (MSE) loss at various future horizons. We find that this new loss is particularly useful when the data is noisy (additive Gaussian noise in the observations), which is often the case in real-life environments. To support the multi-step loss, we first study its properties in two tractable cases: i) a uni-dimensional linear system, and ii) a two-parameter non-linear system. Second, we show on a variety of tasks (environments or datasets) that the models learned with this loss achieve a significant improvement in terms of the averaged R2 score over future prediction horizons. Finally, in the pure batch reinforcement learning setting, we demonstrate that one-step models serve as strong baselines when the dynamics are deterministic, while multi-step models are more advantageous in the presence of noise, highlighting the potential of our approach in real-world applications.
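To make the objective concrete, the sketch below illustrates one plausible reading of the abstract's description: a one-step model is rolled forward recursively, and the MSE at each horizon is accumulated with a per-horizon weight. All names (model, weights, horizon layout) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def multi_step_mse(model, states, actions, targets, weights):
        """Weighted sum of MSE losses at horizons 1..H (illustrative sketch).

        model:   one-step dynamics model, s_{t+1} = model(s_t, a_t)
        states:  (N, d) array of start states s_t
        actions: (N, H, d_a) array of action sequences a_t, ..., a_{t+H-1}
        targets: (N, H, d) array of true future states s_{t+1}, ..., s_{t+H}
        weights: length-H array of per-horizon weights
        """
        H = len(weights)
        s = states
        loss = 0.0
        for h in range(H):
            # Roll the one-step model forward to horizon h+1.
            s = model(s, actions[:, h])
            # MSE between the h-step prediction and the true future state.
            loss += weights[h] * np.mean((s - targets[:, h]) ** 2)
        return loss

Setting weights = [1, 0, ..., 0] recovers the standard one-step MSE training objective, while spreading weight over later horizons penalizes the compounding of prediction errors that the paper targets.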


Type:
Preprint
Date:
2024-02-05
Department:
Data Science
Eurecom Ref:
7602
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to arXiv on 5 February 2024 and is available at:

PERMALINK: https://www.eurecom.fr/publication/7602