Abstract
Changing speaker names consistently throughout a dialogue should not affect its meaning or the corresponding outputs for text generation from dialogues. However, pre-trained language models, which serve as the backbone for dialogue-processing tasks, have been shown to be sensitive to such nuances. This may result in unfairness in real-world applications. No comprehensive analysis of this problem has been done before. In this work, we propose to quantitatively measure a model's sensitivity to speaker names and comprehensively evaluate a number of known methods for reducing speaker name sensitivity, including a novel approach of our own. Extensive experiments on multiple datasets provide a benchmark for this problem and show the favorable performance of our approach in both sensitivity reduction and generation quality.
- Anthology ID:
- 2023.findings-acl.129
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2058–2073
- URL:
- https://aclanthology.org/2023.findings-acl.129
- DOI:
- 10.18653/v1/2023.findings-acl.129
- Cite (ACL):
- Qi Jia, Haifeng Tang, and Kenny Zhu. 2023. Reducing Sensitivity on Speaker Names for Text Generation from Dialogues. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2058–2073, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Reducing Sensitivity on Speaker Names for Text Generation from Dialogues (Jia et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/proper-vol2-ingestion/2023.findings-acl.129.pdf