Residual Learning of Neural Text Generation with n-gram Language Model

Huayang Li, Deng Cai, Jin Xu, Taro Watanabe


Abstract
N-gram language models (LMs) have been largely superseded by neural LMs, as the latter exhibit better performance. However, we find that n-gram models can achieve satisfactory performance on a large proportion of test cases, indicating that they have already captured abundant knowledge of the language at relatively low computational cost. Motivated by this observation, we propose to learn a neural LM that fits the residual between an n-gram LM and the real-data distribution. Combining n-gram LMs with neural LMs not only allows the neural part to focus on a deeper understanding of the language, but also provides a flexible way to customize an LM by switching the underlying n-gram model without changing the neural model. Experimental results on three typical language tasks (i.e., language modeling, machine translation, and summarization) demonstrate that our approach consistently attains additional performance gains over popular standalone neural models. We also show that our approach allows for effective domain adaptation by simply switching to a domain-specific n-gram model, without any extra training.
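
As a rough illustration of the residual idea described in the abstract, the sketch below combines a fixed n-gram LM's log-probabilities with the logits of a neural "residual" model before the softmax, so the neural part only needs to model what the n-gram LM misses. The function name combine_residual_logits, the toy vocabulary, and the exact additive combination rule are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def combine_residual_logits(neural_logits, ngram_log_probs):
    # Next-token distribution proportional to
    # p_ngram(y | context) * exp(neural_logits(y | context)).
    # (Illustrative combination rule, assumed for this sketch.)
    return F.log_softmax(neural_logits + ngram_log_probs, dim=-1)

# Toy example over a 5-token vocabulary.
vocab_size = 5
ngram_log_probs = torch.log(torch.tensor([0.4, 0.3, 0.2, 0.05, 0.05]))
neural_logits = torch.zeros(vocab_size)  # an untrained residual adds nothing

log_probs = combine_residual_logits(neural_logits, ngram_log_probs)
print(log_probs.exp())  # recovers the n-gram distribution when the residual is zero

Under this additive scheme, swapping in a different (e.g., domain-specific) n-gram model changes ngram_log_probs without touching the neural parameters, which is the flexibility the abstract points to.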
Anthology ID:
2022.findings-emnlp.109
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1523–1533
URL:
https://aclanthology.org/2022.findings-emnlp.109
DOI:
10.18653/v1/2022.findings-emnlp.109
Cite (ACL):
Huayang Li, Deng Cai, Jin Xu, and Taro Watanabe. 2022. Residual Learning of Neural Text Generation with n-gram Language Model. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1523–1533, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Residual Learning of Neural Text Generation with n-gram Language Model (Li et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2022.findings-emnlp.109.pdf