Abstract
Long-range semantic coherence remains a challenge in automatic language generation and understanding. We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present coherence boosting, an inference procedure that increases an LM's focus on a long context. We show the benefits of coherence boosting with pretrained models by distributional analyses of generated ordinary text and dialog responses. We also find that coherence boosting applied to state-of-the-art models yields performance gains on various zero-shot NLP tasks with no additional training.
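As a rough illustration of what such an inference procedure could look like, the sketch below contrasts the model's next-token logits conditioned on the full context with its logits conditioned on only the most recent tokens, so that distant words carry more weight. The log-linear combination, the boosting strength `alpha`, the truncation length `k`, and the choice of `gpt2` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# A minimal sketch of coherence boosting at inference time, assuming the
# boosted next-token logits are a log-linear contrast between the model
# conditioned on the full context and the model conditioned on a short
# suffix of it. Model name, alpha, and k are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def boosted_next_token_logits(context: str, alpha: float = 0.5, k: int = 10) -> torch.Tensor:
    """Upweight the influence of distant words by penalizing predictions
    that the short (last-k-token) context already explains."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        full_logits = model(ids).logits[0, -1]           # full context
        short_logits = model(ids[:, -k:]).logits[0, -1]  # last k tokens only
    # Logits contrast: (1 + alpha) * full - alpha * short.
    return (1 + alpha) * full_logits - alpha * short_logits

context = ("I studied French and Spanish for years, though I have "
           "forgotten most of it. Still, I can order dinner in fluent")
next_id = boosted_next_token_logits(context).argmax()
print(tokenizer.decode([int(next_id)]))
```

At `alpha = 0` this reduces to ordinary decoding from the full-context model; larger values push the distribution toward continuations that specifically depend on the distant context rather than on the last few tokens alone.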
- Anthology ID:
- 2022.acl-long.565
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 8214–8236
- URL:
- https://aclanthology.org/2022.acl-long.565
- DOI:
- 10.18653/v1/2022.acl-long.565
- Cite (ACL):
- Nikolay Malkin, Zhen Wang, and Nebojsa Jojic. 2022. Coherence boosting: When your pretrained language model is not paying enough attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8214–8236, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Coherence boosting: When your pretrained language model is not paying enough attention (Malkin et al., ACL 2022)
- PDF:
- https://aclanthology.org/2022.acl-long.565.pdf
- Code:
- zhenwang9102/coherence-boosting
- Data:
- AG News, ARC (AI2 Reasoning Challenge), BoolQ, COPA, HellaSwag, LAMA, LAMBADA, OpenBookQA, PIQA, SST, SST-2, SST-5, WebText