Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment
Sangwon Yu | Jongyoon Song | Bongkyu Hwang | Hoyoung Kang | Sooah Cho | Junhwa Choi | Seongho Joe | Taehee Lee | Youngjune Gwon | Sungroh Yoon
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Binary decision tasks, such as yes-no questions or answer verification, reflect a significant real-world scenario in which users seek confirmation of the correctness of their decisions on specific issues. In this work, we observe that language models exhibit a negative bias in the binary decisions of complex reasoning tasks. Based on our observations and our rationale about attention-based model dynamics, we propose a negative attention score (NAS) to systematically and quantitatively formulate negative bias. Using NAS, we identify attention heads that attend to negative tokens provided in the instructions as answer candidates for binary decisions, regardless of the question in the prompt, and validate their association with negative bias. Additionally, we propose the negative attention score alignment (NASA) method, a parameter-efficient fine-tuning technique that addresses the extracted negatively biased attention heads. Experimental results across various domains of reasoning tasks and a large model search space demonstrate that NASA significantly reduces the gap between precision and recall caused by negative bias while preserving the models' generalization abilities.
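The abstract describes NAS as a per-head measure of how strongly attention heads attend to the negative answer-candidate token given in the instruction, regardless of the question. The sketch below is a rough illustration of that idea, not the paper's exact definition: it assumes NAS can be proxied by the attention weight from the final prompt position to the "No" candidate token, and the model name ("gpt2"), prompt, and scoring rule are placeholder assumptions.

```python
# Hedged sketch: probe per-head attention to the negative answer candidate ("No")
# in a yes/no prompt. This approximates, not reproduces, the paper's NAS.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM that exposes attention weights works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, attn_implementation="eager"  # eager attention returns weight matrices
)
model.eval()

prompt = (
    "Answer the question with Yes or No.\n"
    "Question: Is 17 a prime number?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"][0]

# Locate the position(s) of the negative candidate token ("No") in the prompt.
neg_id = tokenizer.encode(" No", add_special_tokens=False)[0]
neg_positions = (input_ids == neg_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: tuple over layers, each of shape (batch, heads, seq_len, seq_len).
# Score each head by how much the final (answer-generating) position attends to
# the "No" candidate token -- a crude proxy for a negative attention score.
last_pos = input_ids.shape[0] - 1
scores = []
for layer_idx, attn in enumerate(out.attentions):
    attn_last = attn[0, :, last_pos, :]                    # (heads, seq_len)
    head_scores = attn_last[:, neg_positions].sum(dim=-1)  # (heads,)
    for head_idx, s in enumerate(head_scores.tolist()):
        scores.append((layer_idx, head_idx, s))

# Heads with the largest scores are candidates for negatively biased heads.
for layer_idx, head_idx, s in sorted(scores, key=lambda x: -x[2])[:5]:
    print(f"layer {layer_idx:2d} head {head_idx:2d}  attention to 'No': {s:.3f}")
```

In a full analysis, such per-head scores would be aggregated over many prompts so that heads consistently attending to the negative candidate, irrespective of the question, can be identified; those are the kind of heads a fine-tuning method like NASA would target.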