Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model
Hongbin Zhang, Kehai Chen, Xuefeng Bai, Yang Xiang, Min Zhang
Abstract
Large language models (LLMs) have showcased remarkable capabilities in handling various downstream tasks, including multilingual machine translation. Despite their impressive performance, decoder-only LLMs lack an explicit alignment between source and target contexts, leading to translations that may not faithfully represent the original content. To address this, we propose three learning strategies to encourage LLMs to pay more attention to the source context during translation: 1) adjusting the attention weights on the source context by adaptive attention re-weighting; 2) suppressing the irrelevant target prefix using contrastive decoding; 3) avoiding excessive reliance on the target prefix through target-constrained tuning. To verify the effectiveness of these strategies, we curate a new dataset specifically focusing on unfaithful translations generated by LLMs. Experimental results on both the human-collected and general test sets confirm the effectiveness of our method across multiple language pairs. Further human evaluation demonstrates its efficacy in reducing hallucinatory translations and improving translation fidelity.
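Of the three strategies, contrastive decoding is the most direct to illustrate in code: the next token is scored by contrasting the distribution conditioned on the full input (source plus target prefix) against a source-free distribution conditioned on the target prefix alone, down-weighting tokens that the prefix by itself would predict. The following is a minimal sketch assuming a HuggingFace-style causal LM; the function name `contrastive_step`, the weight `alpha`, and the exact scoring rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def contrastive_step(model, full_ids, prefix_only_ids, alpha=0.5):
    """One decoding step of a contrastive scheme (illustrative, not the
    paper's exact formulation).

    full_ids:        token ids for [source sentence + target prefix]
    prefix_only_ids: token ids for [target prefix only, source removed]
    """
    # Distribution conditioned on the source and the target prefix.
    log_p_full = F.log_softmax(model(full_ids).logits[:, -1, :], dim=-1)
    # Distribution conditioned on the target prefix alone; tokens it
    # favors are being predicted without looking at the source.
    log_p_free = F.log_softmax(model(prefix_only_ids).logits[:, -1, :], dim=-1)
    # Reward tokens supported by the source context, penalize tokens
    # the irrelevant target prefix would generate on its own.
    scores = (1 + alpha) * log_p_full - alpha * log_p_free
    return scores.argmax(dim=-1)  # greedy pick of the next token
```

In practice this step would run inside the usual autoregressive loop, appending the chosen token to both contexts; the attention re-weighting and target-constrained tuning strategies act at training time and are not shown here.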
- Anthology ID: 2024.findings-acl.821
- Volume: Findings of the Association for Computational Linguistics: ACL 2024
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 13816–13836
- URL: https://aclanthology.org/2024.findings-acl.821
- DOI: 10.18653/v1/2024.findings-acl.821
- Cite (ACL): Hongbin Zhang, Kehai Chen, Xuefeng Bai, Yang Xiang, and Min Zhang. 2024. Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13816–13836, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model (Zhang et al., Findings 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-acl.821.pdf