Damien Garreau - Professor of Data Science
Date: - Location: Eurecom
Abstract: A popular approach to post-training control of large language models (LLMs) is the steering of intermediate latent representations: one identifies a well-chosen direction, depending on the task at hand, and perturbs representations along this direction at inference time. While many proposals exist for picking this direction, considerably less is understood about how to choose the magnitude of the perturbation, even though its importance is clear: too little and the intended behavior does not emerge, too much and the model's performance degrades beyond repair. In this work, we propose the first theoretical analysis of steering strength. We characterize its effect on next-token probability, presence of a concept, and cross-entropy, deriving precise qualitative laws governing these quantities. Our analysis reveals surprising behaviors, including non-monotonic effects of steering strength. We validate our theoretical predictions empirically on eleven language models, ranging from a small GPT architecture to modern models.

Preprint: https://arxiv.org/pdf/2602.02712

Bio: Damien Garreau is a French mathematician and machine learning theorist. He studied mathematics at the École Normale Supérieure in Paris and completed his PhD at Inria in 2017. He then held a postdoctoral position at the Max Planck Institute for Intelligent Systems in Tübingen. In 2019 he became an associate professor at Université Côte d'Azur in Nice. As of April 2024, he is Professor for Theory of Machine Learning at the University of Würzburg. His research focuses on the theoretical foundations of machine learning, particularly explainable AI.
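The steering operation described in the abstract can be illustrated with a minimal sketch: a hidden representation is shifted along a unit direction by a scalar strength. This is only a generic illustration of the technique, not the paper's implementation; the function name and the choice to normalize the direction are assumptions.

```python
import numpy as np

def steer(hidden, direction, alpha):
    """Shift a hidden representation along a steering direction.

    hidden    : the intermediate latent representation (1-D array).
    direction : the steering direction (normalized to unit length here,
                an assumption for illustration).
    alpha     : the steering strength, the quantity analyzed in the talk.
    Returns h + alpha * (direction / ||direction||).
    """
    d = direction / np.linalg.norm(direction)
    return hidden + alpha * d

# Toy example with a 4-dimensional hidden state: only the component
# along the steering direction changes, linearly in alpha.
h = np.array([1.0, 0.0, 0.0, 0.0])
d = np.array([0.0, 1.0, 0.0, 0.0])
steered = steer(h, d, alpha=2.0)
```

In practice such a perturbation is applied at inference time to the intermediate activations of the model; the talk's question is how the downstream quantities (next-token probability, concept presence, cross-entropy) behave as alpha varies.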