The emerging zero-shot capabilities of Large Language Models (LLMs) have led to their application in areas extending well beyond natural language processing. In reinforcement learning, while LLMs have been used extensively in text-based environments, their integration with continuous state spaces remains understudied. In this paper, we investigate how pre-trained LLMs can be leveraged to predict, in context, the dynamics of continuous Markov decision processes. We identify handling multivariate data and incorporating the control signal as key challenges that limit the potential of LLM deployment in this setting, and propose Disentangled In-Context Learning (DICL) to address them. We present proof-of-concept applications in two reinforcement learning settings: model-based policy evaluation and data-augmented off-policy reinforcement learning, supported by theoretical analysis of the proposed methods. Our experiments further demonstrate that our approach produces well-calibrated uncertainty estimates. We release the code at https://github.com/abenechehab/dicl.
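For illustration, the following is a minimal sketch of the loop the abstract describes: disentangle the multivariate (state, action) trajectory, forecast each resulting univariate series in context with a pre-trained LLM, and map the forecast back to the original space. The helper names (`dicl_step`, `llm_forecast`) are assumptions made for this sketch, not the API of the released dicl package, and the PCA step merely stands in for the paper's disentangling transformation.

```python
# Hypothetical sketch of the DICL idea; names and signatures are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def dicl_step(trajectory, llm_forecast, n_components):
    """Predict the next step of a continuous MDP trajectory.

    trajectory:   (T, d) array of concatenated states and actions.
    llm_forecast: callable mapping a 1-D series to its predicted next value,
                  e.g. by serializing the series into tokens for the LLM
                  (assumed to exist; not part of this sketch).
    Returns a predicted next vector of dimension d.
    """
    # 1) Disentangle the correlated dimensions into independent components.
    pca = PCA(n_components=n_components)
    z = pca.fit_transform(trajectory)  # (T, k) disentangled components

    # 2) Forecast each component independently, in context, with the LLM.
    z_next = np.array([llm_forecast(z[:, i]) for i in range(z.shape[1])])

    # 3) Reconstruct the prediction in the original state-action space.
    return pca.inverse_transform(z_next[None, :])[0]
```

In practice, serializing each component into the LLM's token space and decoding the returned distribution (which also yields the uncertainty estimates mentioned above) is handled by the released code; this sketch only outlines the disentangle-forecast-reconstruct structure.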
Zero-shot model-based reinforcement learning using large language models
ICLR 2025, 13th International Conference on Learning Representations, 24-28 April 2025, Singapore, Singapore
Type:
Poster / Demo
City:
Singapore
Date:
2025-01-22
Department:
Data Science
Eurecom Ref:
8081
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in ICLR 2025, 13th International Conference on Learning Representations, 24-28 April 2025, Singapore, Singapore.
See also:
Permalink: https://www.eurecom.fr/publication/8081