CoreLM: Coreference-aware Language Model Fine-Tuning

Nikolaos Stylianou, Ioannis Vlahavas


Abstract
Language Models underpin all modern Natural Language Processing (NLP) tasks. The introduction of the Transformer architecture has contributed significantly to making Language Modeling very effective across many NLP tasks, leading to significant advancements in the field. However, Transformers come with a large computational cost, which grows quadratically with respect to the input length. This presents a challenge, as understanding long texts requires a lot of context. In this paper, we propose a fine-tuning framework, named CoreLM, that extends the architecture of current Pretrained Language Models so that they incorporate explicit entity information. By introducing entity representations, we make available information outside the contextual space of the model, which results in a better Language Model for a fraction of the computational cost. We implement our approach using GPT2 and compare the fine-tuned model to the original. Our proposed model achieves a lower perplexity on the GUMBY and LAMBADA datasets compared to GPT2 and to a fine-tuned version of GPT2 without any changes. We also compare the models’ performance in terms of accuracy on LAMBADA and the Children’s Book Test, with and without the use of model-created coreference annotations.
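The abstract does not spell out how entity information enters the model, so the following Python sketch only illustrates the general idea: coreference-cluster ids are embedded and added to GPT-2's token embeddings before fine-tuning with the standard language-modeling loss. The class name EntityAwareGPT2, the max_clusters parameter, and the injection point are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch, not the CoreLM implementation: entity (coreference cluster)
# ids are embedded and added to GPT-2's token embeddings via `inputs_embeds`.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class EntityAwareGPT2(nn.Module):
    """GPT-2 language model with an extra embedding table for coreference-cluster ids."""

    def __init__(self, model_name="gpt2", max_clusters=128):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(model_name)
        hidden = self.gpt2.config.n_embd
        # Cluster id 0 is reserved for tokens that belong to no entity mention.
        self.entity_embed = nn.Embedding(max_clusters, hidden, padding_idx=0)

    def forward(self, input_ids, cluster_ids, labels=None):
        # GPT-2 still adds its own position embeddings internally; we only add
        # the entity-cluster embedding on top of the token embeddings.
        tok_embeds = self.gpt2.transformer.wte(input_ids)
        ent_embeds = self.entity_embed(cluster_ids)
        return self.gpt2(inputs_embeds=tok_embeds + ent_embeds, labels=labels)


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = EntityAwareGPT2()

text = "Alice met Bob. She greeted him."
enc = tokenizer(text, return_tensors="pt")
# All zeros = "no entity"; in practice a coreference resolver would mark, e.g.,
# the tokens of Alice/She with cluster id 1 and Bob/him with cluster id 2.
cluster_ids = torch.zeros_like(enc["input_ids"])

out = model(enc["input_ids"], cluster_ids, labels=enc["input_ids"])
print(float(out.loss))  # language-modeling loss; fine-tune by backpropagating this
```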
Anthology ID:
2021.crac-1.8
Volume:
Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Maciej Ogrodniczuk, Sameer Pradhan, Massimo Poesio, Yulia Grishina, Vincent Ng
Venue:
CRAC
Publisher:
Association for Computational Linguistics
Pages:
70–81
URL:
https://aclanthology.org/2021.crac-1.8
DOI:
10.18653/v1/2021.crac-1.8
Bibkey:
Cite (ACL):
Nikolaos Stylianou and Ioannis Vlahavas. 2021. CoreLM: Coreference-aware Language Model Fine-Tuning. In Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference, pages 70–81, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
CoreLM: Coreference-aware Language Model Fine-Tuning (Stylianou & Vlahavas, CRAC 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2021.crac-1.8.pdf
Software:
 2021.crac-1.8.Software.zip
Video:
 https://preview.aclanthology.org/nschneid-patch-4/2021.crac-1.8.mp4
Data:
CBT (Children's Book Test), LAMBADA, WikiText-103, WikiText-2
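Since the abstract reports perplexity on LAMBADA (listed in the data above), a minimal evaluation sketch with the Hugging Face transformers and datasets libraries follows; the "lambada" dataset identifier, the test split, and the 100-example sample are assumptions for illustration, not the paper's exact evaluation setup.

```python
# Minimal perplexity evaluation sketch for a (fine-tuned) GPT-2 checkpoint.
import math
import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Assumed dataset id and split; swap in the fine-tuned checkpoint and the
# paper's preprocessing for a faithful comparison.
dataset = load_dataset("lambada", split="test")

total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for example in dataset.select(range(100)):  # small sample for illustration
        enc = tokenizer(example["text"], return_tensors="pt",
                        truncation=True, max_length=1024)
        out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].size(1) - 1   # loss is averaged over n predicted tokens
        total_nll += out.loss.item() * n
        total_tokens += n

print("perplexity:", math.exp(total_nll / total_tokens))
```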