Automatically learning fallback strategies with model-free reinforcement learning in safety-critical driving scenarios

Lecerf, Ugo; Yemdji Tchassi, Christelle; Aubert, Sébastien; Michiardi, Pietro
ICMLT 2022, International Conference on Machine Learning Technologies 2022, 11-13 March 2022, Rome, Italy

When learning to behave in a stochastic environment where safety is critical, such as driving a vehicle in traffic, it is natural for human drivers to plan fallback strategies as a backup to use if ever there is an unexpected change in the environment. Knowing to expect the unexpected, and planning for such outcomes, increases our robustness to unseen scenarios and may help prevent catastrophic failures. Control of Autonomous Vehicles (AVs) has a particular interest in knowing when and how to use fallback strategies in the interest of safety. Due to the imperfect information available to an AV about its environment, it is important to have alternate strategies at the ready that might not have been deduced from the original training data distribution. In this paper we present a principled approach for a model-free Reinforcement Learning (RL) agent to capture multiple modes of behaviour in an environment. We introduce an extra pseudo-reward term to the reward model, to encourage exploration of areas of state-space different from the areas privileged by the optimal policy. We base this reward term on a distance metric between the trajectories of agents, in order to force policies to focus on different areas of state-space than the initial exploring agent. Throughout the paper, we refer to this particular training paradigm as learning fallback strategies. We apply this method to an autonomous driving scenario, and show that we are able to learn useful policies that would otherwise have been missed during training, and would be unavailable when executing the control algorithm.
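The abstract's central mechanism can be illustrated with a minimal sketch: an environment reward augmented by a pseudo-reward proportional to a distance between the current agent's trajectory and the trajectories of previously learned policies. The function names, the mean-Euclidean trajectory metric, and the `beta` weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Mean Euclidean distance between corresponding states of two
    equal-length trajectories (one simple choice of trajectory metric;
    the paper's actual metric may differ)."""
    a = np.asarray(traj_a, dtype=float)
    b = np.asarray(traj_b, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=-1)))

def augmented_reward(env_reward, agent_traj, reference_trajs, beta=0.1):
    """Environment reward plus a pseudo-reward bonus that grows with the
    distance to the closest reference trajectory, pushing the new policy
    toward areas of state-space the earlier policies did not visit."""
    if not reference_trajs:
        return env_reward  # no prior policies: plain reward
    bonus = min(trajectory_distance(agent_traj, t) for t in reference_trajs)
    return env_reward + beta * bonus
```

Under this sketch, a trajectory that retraces an earlier policy earns no bonus, while one that stays far from all reference trajectories is rewarded for doing so, which is the incentive the abstract describes for learning fallback behaviours.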


DOI
Type:
Conference
City:
Rome
Date:
2022-03-11
Department:
Data Science
Eurecom Ref:
6872
Copyright:
© ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ICMLT 2022, International Conference on Machine Learning Technologies 2022, 11-13 March 2022, Rome, Italy https://doi.org/10.1145/3529399.3529432

PERMALINK : https://www.eurecom.fr/publication/6872