Learning to vary: Teaching LMs to reproduce human linguistic variability in next-word prediction

Tobias Groot, Salo Lacunes, Evgenia Ilia


Abstract
Natural language generation (NLG) tasks are often subject to inherent variability; for example, predicting the next word given a context admits multiple valid responses, as becomes evident when several humans complete the task. Pluralistically aligned language models (LMs), which faithfully reproduce the diversity of perspectives across a population of interest, are clearly desirable; yet Ilia and Aziz (2024) show that LMs do not reproduce this type of linguistic variability well. They speculate that this inability might stem from LMs not being consistently trained on data reflecting such inherent variability. We therefore investigate whether training LMs on multiple plausible word continuations per context can improve their ability to reproduce human linguistic variability in next-word prediction. We apply fine-tuning techniques to pre-trained and instruction-tuned models, demonstrating their potential by fine-tuning GPT-2 and Mistral-7B-IT on the Provo Corpus. Our evaluation, which measures the divergence between empirically estimated human and model next-word distributions across contexts before and after fine-tuning, shows that multi-label fine-tuning improves the LMs' ability to reproduce linguistic variability, both for contexts that admit higher variability and for those that admit lower variability.
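
To make the training signal and evaluation described above concrete, the following is a minimal Python sketch, not the authors' released code: it estimates a human next-word distribution from several annotators' continuations (in the spirit of the Provo Corpus, where many readers predict each next word), compares it to a model's next-word distribution using total variation distance (an illustrative divergence choice; the abstract does not name the exact measure), and computes a multi-label cross-entropy against the full human distribution rather than a single gold word. All data and names here are hypothetical.

import math
from collections import Counter

def human_distribution(continuations):
    # Empirical next-word distribution from several human guesses
    # for the same context.
    counts = Counter(continuations)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def total_variation(p, q):
    # TVD between two word->probability dicts; an illustrative
    # divergence, not necessarily the one used in the paper.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in support)

def soft_label_nll(model_probs, target_probs, eps=1e-12):
    # Cross-entropy of the model against the *full* human distribution,
    # i.e. a multi-label training signal instead of a single gold word.
    return -sum(p * math.log(model_probs.get(w, eps))
                for w, p in target_probs.items())

# Hypothetical annotations: five readers predict the next word.
humans = ["dog", "dog", "cat", "dog", "puppy"]
# Hypothetical model next-word distribution over its top candidates.
model = {"dog": 0.7, "cat": 0.1, "bird": 0.2}

p_human = human_distribution(humans)
print(total_variation(p_human, model))   # evaluation-style divergence
print(soft_label_nll(model, p_human))    # multi-label training-style loss

In an actual fine-tuning setup the soft-label loss would be applied to the model's full-vocabulary softmax per context; the dictionary form above is only to keep the sketch self-contained.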
Anthology ID:
2025.uncertainlp-main.9
Volume:
Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editor:
Venues:
UncertaiNLP | WS
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
73–88
Language:
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.9/
DOI:
Bibkey:
Cite (ACL):
Tobias Groot, Salo Lacunes, and Evgenia Ilia. 2025. Learning to vary: Teaching LMs to reproduce human linguistic variability in next-word prediction. In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), pages 73–88, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Learning to vary: Teaching LMs to reproduce human linguistic variability in next-word prediction (Groot et al., UncertaiNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.9.pdf