Abstract
Language models encode linguistic properties and are used as input for more specific models. Using their word representations as-is in specialised and low-resource domains, however, can be suboptimal. Adaptation methods exist, but because these models rely heavily on attention, they often overlook global information about how words, terms, and concepts relate to each other across a corpus. We argue that this global information can influence downstream task results, and we combine it with contextual information using graph convolutional networks (GCNs) built on vocabulary graphs. By outperforming baselines, we show that this architecture is beneficial for domain-specific tasks.
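To make the combination concrete, below is a minimal sketch of the general idea: one graph-convolution layer over a vocabulary graph, fused with a pretrained language model's contextual output. The graph construction (PMI-style co-occurrence edges), the fusion by concatenation, and all names (`VocabGraphFusion`, `gcn_dim`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: fuse a BERT contextual representation with a GCN over a vocabulary
# graph. Assumptions not taken from the paper: the graph is a sparse,
# symmetrically normalised adjacency A_hat = D^{-1/2}(A + I)D^{-1/2} over the
# wordpiece vocabulary (e.g. from PMI co-occurrence), a single GCN layer,
# and concatenation as the fusion step before a classification head.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class VocabGraphFusion(nn.Module):
    def __init__(self, vocab_graph, model_name="bert-base-uncased",
                 gcn_dim=128, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.a_hat = vocab_graph                            # sparse (V, V)
        self.gcn = nn.Linear(hidden, gcn_dim, bias=False)   # GCN weight W
        self.classifier = nn.Linear(hidden + gcn_dim, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual sentence vector from the language model ([CLS] token).
        ctx = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        # One graph convolution over the static input-embedding table:
        # H = ReLU(A_hat E W); each vocabulary item mixes in its corpus-level
        # neighbours, i.e. the "global" information the abstract refers to.
        emb = self.bert.get_input_embeddings().weight       # (V, hidden)
        graph_table = torch.relu(self.gcn(torch.sparse.mm(self.a_hat, emb)))
        # Mean-pool the graph vectors of the tokens present in the input.
        tok = graph_table[input_ids]                        # (B, T, gcn_dim)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (tok * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        # Fuse global (graph) and contextual (attention) views.
        return self.classifier(torch.cat([ctx, pooled], dim=-1))


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    v = tok.vocab_size
    # Placeholder graph: identity (no co-occurrence edges); a real vocabulary
    # graph would carry PMI-weighted edges between related domain terms.
    idx = torch.arange(v)
    a_hat = torch.sparse_coo_tensor(torch.stack([idx, idx]),
                                    torch.ones(v), (v, v))
    model = VocabGraphFusion(a_hat)
    batch = tok(["postoperative wound infection"], return_tensors="pt")
    print(model(batch["input_ids"], batch["attention_mask"]).shape)  # (1, 2)
```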
- Anthology ID:
- 2022.dlg4nlp-1.5
- Volume:
- Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022)
- Month:
- July
- Year:
- 2022
- Address:
- Seattle, Washington
- Editors:
- Lingfei Wu, Bang Liu, Rada Mihalcea, Jian Pei, Yue Zhang, Yunyao Li
- Venue:
- DLG4NLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 36–42
- URL:
- https://aclanthology.org/2022.dlg4nlp-1.5
- DOI:
- 10.18653/v1/2022.dlg4nlp-1.5
- Cite (ACL):
- Merieme Bouhandi, Emmanuel Morin, and Thierry Hamon. 2022. Graph Neural Networks for Adapting Off-the-shelf General Domain Language Models to Low-Resource Specialised Domains. In Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), pages 36–42, Seattle, Washington. Association for Computational Linguistics.
- Cite (Informal):
- Graph Neural Networks for Adapting Off-the-shelf General Domain Language Models to Low-Resource Specialised Domains (Bouhandi et al., DLG4NLP 2022)
- PDF:
- https://aclanthology.org/2022.dlg4nlp-1.5.pdf