Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk


Abstract
In this paper, we detail novel strategies for interpolating personalized language models and methods for handling out-of-vocabulary (OOV) tokens, both aimed at improving personalized language modeling. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off-to-uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.4% per user. In doing this research, we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.
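The abstract describes linearly interpolating a global LSTM authoring model with a per-user n-gram model, tuning the interpolation coefficient and a back-off-to-uniform OOV penalty, and evaluating by per-user perplexity. The Python sketch below illustrates that general technique only, under stated assumptions; it is not the authors' implementation. The dict-based stand-in "models", the function names, and the constants (lam, vocab_size, oov_penalty) are illustrative choices, not values from the paper.

# Minimal sketch (not the authors' code) of interpolating a global model with
# a per-user model, with a uniform back-off for OOV tokens. All names and
# constants below are illustrative assumptions.
import math

def interpolated_prob(token, context, global_model, user_model,
                      lam=0.2, vocab_size=50000, oov_penalty=None):
    # P(token | context) = lam * P_user + (1 - lam) * P_global.
    # If a model has no estimate for (context, token), back off to a uniform
    # probability 1 / vocab_size, optionally scaled by a tunable penalty.
    uniform = 1.0 / vocab_size
    if oov_penalty is not None:
        uniform *= oov_penalty
    p_global = global_model.get((context, token), uniform)
    p_user = user_model.get((context, token), uniform)
    return lam * p_user + (1.0 - lam) * p_global

def perplexity(tokens, context_fn, global_model, user_model, lam=0.2):
    # Per-user perplexity of a token sequence under the interpolated model.
    log_prob = 0.0
    for i, tok in enumerate(tokens):
        ctx = context_fn(tokens, i)
        log_prob += math.log(interpolated_prob(tok, ctx, global_model, user_model, lam))
    return math.exp(-log_prob / len(tokens))

# Toy usage: unigram-style "models" stored as dicts keyed by (context, token).
global_lm = {((), "the"): 0.05, ((), "cat"): 0.01}
user_lm = {((), "cat"): 0.08}
tokens = ["the", "cat"]
ctx_fn = lambda toks, i: ()  # empty context for this toy example
print(perplexity(tokens, ctx_fn, global_lm, user_lm, lam=0.3))

In practice, the interpolation coefficient and OOV penalty would be tuned per user on held-out text, which is the optimization the abstract refers to.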
Anthology ID: 2020.nli-1.3
Volume: Proceedings of the First Workshop on Natural Language Interfaces
Month: July
Year: 2020
Address: Online
Editors: Ahmed Hassan Awadallah, Yu Su, Huan Sun, Scott Wen-tau Yih
Venue: NLI
Publisher: Association for Computational Linguistics
Pages: 20–26
URL: https://aclanthology.org/2020.nli-1.3
DOI: 10.18653/v1/2020.nli-1.3
Cite (ACL): Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, and Chris Quirk. 2020. Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation. In Proceedings of the First Workshop on Natural Language Interfaces, pages 20–26, Online. Association for Computational Linguistics.
Cite (Informal): Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation (Shao et al., NLI 2020)
PDF: https://aclanthology.org/2020.nli-1.3.pdf
Video: http://slideslive.com/38929800