Yu Fan
2026
Can Reasoning Help Large Language Models Capture Human Annotator Disagreement?
Jingwei Ni | Yu Fan | Vilém Zouhar | Donya Rooein | Alexander Miserlis Hoyle | Mrinmaya Sachan | Markus Leippold | Dirk Hovy | Elliott Ash
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Variation in human annotation (i.e., disagreement) is common in NLP, often reflecting important information such as task subjectivity and sample ambiguity. Modeling this variation is important for applications that are sensitive to such information. Although RLVR-style reasoning (Reinforcement Learning with Verifiable Rewards) has improved Large Language Model (LLM) performance on many tasks, it remains unclear whether such reasoning enables LLMs to capture informative variation in human annotation. In this work, we evaluate the influence of different reasoning settings on LLM disagreement modeling. We systematically test each reasoning setting across model sizes, distribution expression methods, and steering methods, resulting in 60 experimental setups across 3 tasks. Surprisingly, our results show that RLVR-style reasoning degrades performance in disagreement modeling, while naive Chain-of-Thought (CoT) reasoning improves the performance of RLHF LLMs (RL from human feedback). These findings underscore the potential risk of replacing human annotators with reasoning LLMs, especially when disagreements are important.
2025
The Medium Is Not the Message: Deconfounding Document Embeddings via Linear Concept Erasure
Yu Fan | Yang Tian | Shauli Ravfogel | Mrinmaya Sachan | Elliott Ash | Alexander Hoyle
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Embedding-based similarity metrics between text sequences can be influenced not just by the content dimensions we most care about, but can also be biased by spurious attributes like the text’s source or language. These document confounders cause problems for many applications, but especially those that need to pool texts from different corpora. This paper shows that a debiasing algorithm that removes information about observed confounders from the encoder representations substantially reduces these biases at a minimal computational cost. Document similarity and clustering metrics improve across every embedding variant and task we evaluate—often dramatically. Interestingly, performance on out-of-distribution benchmarks is not impacted, indicating that the embeddings are not otherwise degraded.
Co-DETECT: Collaborative Discovery of Edge Cases in Text Classification
Chenfei Xiong | Jingwei Ni | Yu Fan | Vilém Zouhar | Donya Rooein | Lorena Calvo-Bartolomé | Alexander Hoyle | Zhijing Jin | Mrinmaya Sachan | Markus Leippold | Dirk Hovy | Mennatallah El-Assady | Elliott Ash
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We introduce Co-DETECT (Collaborative Discovery of Edge cases in TExt ClassificaTion), a novel mixed-initiative annotation framework that integrates human expertise with automatic annotation guided by large language models (LLMs). Co-DETECT starts with an initial, sketch-level codebook and dataset provided by a domain expert, then leverages the LLM to annotate the data and identify edge cases that are not well described by the initial codebook. Specifically, Co-DETECT flags challenging examples, induces high-level, generalizable descriptions of edge cases, and assists users in incorporating edge-case handling rules to improve the codebook. This iterative process enables more effective handling of nuanced phenomena through compact, generalizable annotation rules. An extensive user study, together with qualitative and quantitative analyses, demonstrates the effectiveness of Co-DETECT.