Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model

Runzhe Zhan, Xinyi Yang, Derek Wong, Lidia Chao, Yue Zhang


Abstract
While supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of foundation large language models (LLMs) to specific preferences, concerns have been raised about the depth of this alignment, with some critiques suggesting it is merely “superficial”. We critically examine this hypothesis within the scope of cross-lingual generation tasks, proposing that the effectiveness of SFT may be constrained by its reliance on prior tokens to guide cross-lingual generation. Based on this crucial insight, and in response to the challenges posed by the costly and limited availability of non-English data for SFT, we introduce a novel training-free alignment method named PreTTY, which employs minimal task-related prior tokens to bridge the foundation LLM and the SFT LLM, achieving comparable performance without training. Experiments on machine translation and part-of-speech tagging across seven languages demonstrate the efficacy of PreTTY in cross-lingual settings. Remarkably, by initiating the decoding process with only one or two prior tokens, foundation LLMs can attain up to 98% of the performance metrics of their SFT counterparts. This method presents a cost-effective alternative to traditional SFT and advances the democratization of multilingual LLMs.
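The core mechanism described in the abstract is simple enough to sketch: rather than fine-tuning, one or two task-related tokens are placed at the start of the model's response, and the foundation model then continues decoding from an already on-task prefix. Below is a minimal illustrative sketch using Hugging Face transformers; the model name, prompt template, and prior token are assumptions for illustration only, not the paper's exact setup.

```python
# Illustrative sketch of prior-token decoding in the spirit of PreTTY.
# The model, prompt template, and prior token below are assumptions,
# not the configuration reported in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical foundation (non-SFT) LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Task instruction for zero-shot machine translation.
instruction = (
    "Translate the following English sentence into German.\n"
    "English: The weather is nice today.\n"
    "German:"
)

# One task-related prior token appended after the instruction, here the
# plausible first word of a German translation. It serves as the "yarn"
# that pulls the base model into on-task, target-language decoding.
prior_tokens = " Das"

inputs = tokenizer(instruction + prior_tokens, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Print only the newly generated continuation, excluding the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(prior_tokens + tokenizer.decode(new_tokens, skip_special_tokens=True))
```

In this sketch the prior token is fixed by hand; the paper's contribution is showing that such minimal, task-related prior tokens can recover most of the SFT model's cross-lingual performance without any training.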
Anthology ID:
2024.findings-acl.722
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12131–12145
URL:
https://aclanthology.org/2024.findings-acl.722
DOI:
10.18653/v1/2024.findings-acl.722
Cite (ACL):
Runzhe Zhan, Xinyi Yang, Derek Wong, Lidia Chao, and Yue Zhang. 2024. Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model. In Findings of the Association for Computational Linguistics: ACL 2024, pages 12131–12145, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model (Zhan et al., Findings 2024)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2024.findings-acl.722.pdf
Video:
https://preview.aclanthology.org/add_acl24_videos/2024.findings-acl.722.mp4