Yann Choho
2026
In-Distribution Steering: Balancing Control and Coherence in Language Model Generation
Arthur Vogels | Benjamin Wong | Yann Choho | Annabelle Blangero | Milan Bhan
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Activation steering methods control large language model (LLM) behavior by modifying internal activations at inference time. However, most existing activation steering methods rely on a fixed steering strength, leading either to insufficient control or to overly strong interventions that degrade text plausibility and coherence. We introduce In-Distribution Steering (IDS), a novel method that adapts steering strength based on the input data distribution in representation space. IDS dynamically adjusts interventions according to how far a given input lies within the distribution, enabling adaptive intervention while preserving stability during text generation. Experiments demonstrate that IDS achieves strong accuracy on classification tasks while producing coherent text without collapse, making IDS particularly well suited for real-world applications.
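The core idea of distance-adaptive steering can be illustrated with a toy sketch. The policy below (Mahalanobis distance to a reference activation distribution, with the steering strength halved until the steered activation stays in-distribution) is an assumption for illustration, not the paper's exact algorithm; all names and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "in-distribution" activations (stand-ins for hidden states
# collected on reference text).
acts = rng.normal(size=(500, 8))
mu = acts.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(acts, rowvar=False))

# A unit-norm steering direction (hypothetical concept vector).
steer = rng.normal(size=8)
steer /= np.linalg.norm(steer)

def mahalanobis(h):
    """Distance of activation h to the reference distribution."""
    d = h - mu
    return float(np.sqrt(d @ cov_inv @ d))

def adaptive_steer(h, base_strength=4.0, max_dist=4.0):
    """Shrink the steering strength until the steered activation
    stays within max_dist of the reference distribution
    (illustrative policy, not the paper's exact rule)."""
    alpha = base_strength
    while alpha > 1e-3 and mahalanobis(h + alpha * steer) > max_dist:
        alpha *= 0.5
    return h + alpha * steer, alpha

h = rng.normal(size=8)
h_steered, used_alpha = adaptive_steer(h)
print(f"strength used: {used_alpha}, distance: {mahalanobis(h_steered):.2f}")
```

A fixed-strength baseline would apply `base_strength` unconditionally; the adaptive version backs off exactly when the intervention would push the activation out of distribution.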
2025
Towards Achieving Concept Completeness for Textual Concept Bottleneck Models
Milan Bhan | Yann Choho | Jean-Noël Vittaut | Nicolas Chesneau | Pierre Moreau | Marie-Jeanne Lesot
Findings of the Association for Computational Linguistics: EMNLP 2025
This paper proposes the Complete Textual Concept Bottleneck Model (CT-CBM), a novel TCBM generator that builds concept labels in a fully unsupervised manner using a small language model, eliminating both the need for predefined human-labeled concepts and for LLM annotations. CT-CBM iteratively targets and adds important and identifiable concepts to the bottleneck layer to create a complete concept basis. CT-CBM achieves strong results against competitors in terms of concept basis completeness and concept detection accuracy, offering a promising solution to reliably enhance the interpretability of NLP classifiers.
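The bottleneck structure the abstract relies on can be sketched in a few lines: predictions pass through a small layer of named concept activations, so each label decomposes over interpretable concepts. The probes and weights below are randomly initialized stand-ins (the paper derives concepts via a small language model); every name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_concepts, emb_dim, n_classes = 5, 16, 2

# Hypothetical concept detectors: one linear probe per concept over a
# sentence embedding (stand-ins for the unsupervised concept basis).
W_concept = rng.normal(size=(n_concepts, emb_dim))

# Interpretable head: the label is a linear function of concept
# activations only, so the prediction decomposes over concepts.
W_label = rng.normal(size=(n_classes, n_concepts))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(embedding):
    concepts = sigmoid(W_concept @ embedding)  # bottleneck layer in [0, 1]
    logits = W_label @ concepts                # classifier sees only concepts
    return concepts, logits

emb = rng.normal(size=emb_dim)          # toy sentence embedding
concepts, logits = forward(emb)
pred = int(np.argmax(logits))
# Per-concept contribution to the predicted class: weight x activation.
contrib = W_label[pred] * concepts
print(pred, contrib.round(2))
```

Because the classifier never sees the raw embedding, "completeness" of the concept basis determines how much task-relevant information survives the bottleneck, which is what CT-CBM's iterative concept addition targets.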