Discovering Financial Hypernyms by Prompting Masked Language Models

Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang

Abstract
With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this kind of research has rarely investigated semantic relations in specialized domains. The present study aims at testing a general-domain and a domain-adapted Transformer model on two datasets of financial term-hypernym pairs using a prompting methodology. Our results show that the choice of prompt has a critical impact on the models' performance, and that domain adaptation on financial text generally improves the models' capacity to associate the target terms with the right hypernyms, although the most successful models are those that retain a general-domain vocabulary.
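As a minimal sketch of the prompting setup the abstract describes, the snippet below queries a masked language model with a hypernymy prompt and reads off its top [MASK] predictions as candidate hypernyms. The template wording, the example terms, and the choice of bert-base-uncased are illustrative assumptions, not the paper's exact configuration; the paper compares several prompts and also a domain-adapted financial model.

```python
# Minimal sketch of hypernym discovery via masked-LM prompting.
# Assumptions (illustrative, not the paper's exact setup): the prompt
# template, the example terms, and the bert-base-uncased checkpoint.
from transformers import pipeline

# General-domain masked language model; the paper additionally tests
# a model adapted to financial text.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

terms = ["bond", "mortgage", "dividend"]
template = "In finance, a {term} is a kind of [MASK]."

for term in terms:
    # The top_k fillers for the masked slot serve as hypernym candidates.
    predictions = fill_mask(template.format(term=term), top_k=5)
    candidates = [p["token_str"].strip() for p in predictions]
    print(f"{term}: {candidates}")
```

Evaluation in this setting typically checks whether the gold hypernym of each term appears among the model's top-ranked predictions for the masked slot.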
Anthology ID: 2022.fnp-1.2
Volume: Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
Month: June
Year: 2022
Address: Marseille, France
Editors: Mahmoud El-Haj, Paul Rayson, Nadhem Zmandar
Venue: FNP
Publisher: European Language Resources Association
Pages: 10–16
URL: https://aclanthology.org/2022.fnp-1.2
Cite (ACL): Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, and Chu-Ren Huang. 2022. Discovering Financial Hypernyms by Prompting Masked Language Models. In Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022, pages 10–16, Marseille, France. European Language Resources Association.
Cite (Informal): Discovering Financial Hypernyms by Prompting Masked Language Models (Peng et al., FNP 2022)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2022.fnp-1.2.pdf