Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases

Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, Yanghua Xiao


Abstract
Definition bias is a negative phenomenon that can mislead models. In information extraction (IE), definition bias appears not only across datasets from different domains but also within datasets sharing the same domain. We identify two types of definition bias in IE: bias among information extraction datasets, and bias between information extraction datasets and instruction-tuning datasets. To systematically investigate definition bias, we conduct three probing experiments that quantitatively analyze it and reveal the limitations of unified information extraction and large language models in resolving it. To mitigate definition bias in information extraction, we propose a multi-stage framework consisting of definition bias measurement, bias-aware fine-tuning, and task-specific bias mitigation. Experimental results demonstrate the effectiveness of our framework in addressing definition bias.
Anthology ID:
2024.findings-emnlp.601
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10274–10287
URL:
https://preview.aclanthology.org/add-emnlp-2024-awards/2024.findings-emnlp.601/
DOI:
10.18653/v1/2024.findings-emnlp.601
Cite (ACL):
Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, and Yanghua Xiao. 2024. Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 10274–10287, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases (Huang et al., Findings 2024)
PDF:
https://preview.aclanthology.org/add-emnlp-2024-awards/2024.findings-emnlp.601.pdf