Abstract
Many works have employed prompt tuning methods to automatically optimize prompt queries and extract the factual knowledge stored in pre-trained language models. In this paper, we observe that the optimized prompts, both discrete and continuous, exhibit undesirable object bias. To address this problem, we propose a novel prompt tuning method called MeCoD, consisting of three modules: Prompt Encoder, Object Equalization, and Biased Object Obstruction. Experimental results show that MeCoD significantly reduces object bias and at the same time improves the accuracy of factual knowledge extraction.
- Anthology ID:
- 2023.findings-acl.270
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 4420–4432
- URL:
- https://aclanthology.org/2023.findings-acl.270
- DOI:
- 10.18653/v1/2023.findings-acl.270
- Cite (ACL):
- Yuhang Wang, Dongyuan Lu, Chao Kong, and Jitao Sang. 2023. Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4420–4432, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction (Wang et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-3/2023.findings-acl.270.pdf
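To make the abstract's notion of "object bias" concrete: a biased prompt tends to predict the same few objects regardless of the subject being queried, so the entropy of its predicted-object distribution is low. The sketch below is an illustrative diagnostic of this idea, not the paper's MeCoD method; the function name and the toy prediction lists are assumptions for illustration only.

```python
from collections import Counter
import math

def object_entropy(predictions):
    """Shannon entropy (bits) of a prompt's predicted-object distribution.

    Low entropy indicates the prompt collapses onto a few objects no matter
    the subject -- the object bias the abstract describes. (Illustrative
    diagnostic; not the paper's actual measure or method.)
    """
    counts = Counter(predictions)
    total = len(predictions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical predictions for ten different subjects under two prompts:
# a biased prompt mostly outputs "Paris"; a balanced one spreads out.
biased = ["Paris"] * 9 + ["London"]
balanced = ["Paris", "London", "Rome", "Berlin", "Madrid"] * 2

print(round(object_entropy(biased), 3))    # → 0.469
print(round(object_entropy(balanced), 3))  # → 2.322
```

Under this view, a debiasing method such as the paper's Object Equalization module should push the predicted-object distribution toward higher entropy while keeping the predictions accurate.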