A Data Bootstrapping Recipe for Low-Resource Multilingual Relation Classification
Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, Soumen Chakrabarti
Abstract
Relation classification (sometimes called ‘extraction’) requires trustworthy datasets for fine-tuning large language models, as well as for evaluation. Data collection is challenging for Indian languages, because they are syntactically and morphologically diverse, as well as different from resource-rich languages like English. Despite recent interest in deep generative models for Indian languages, relation classification is still not well served by public datasets. In response, we present IndoRE, a dataset with 39K entity- and relation-tagged gold sentences in three Indian languages, plus English. We start with a multilingual BERT (mBERT) based system that captures entity span positions and type information and provides competitive monolingual relation classification. Using this system, we explore and compare transfer mechanisms between languages. In particular, we study the accuracy-efficiency tradeoff between expensive gold instances and translated and aligned ‘silver’ instances.
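As a rough illustration of the kind of system the abstract describes, the sketch below shows how an mBERT encoder can consume a sentence whose entity spans are wrapped in type-aware special tokens and predict a relation label from the encoded input. It assumes the HuggingFace transformers library; the marker scheme, relation count, and example sentence are illustrative assumptions, not the authors' exact design or released code.

```python
# Minimal sketch (not the authors' released code) of an mBERT-based relation
# classifier that injects entity span and type information via special tokens.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT

class MBertRelationClassifier(nn.Module):
    def __init__(self, num_relations: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        # Predict the relation from the [CLS] representation of the marked sentence.
        self.head = nn.Linear(self.encoder.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])  # [CLS] token

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Hypothetical type-aware markers wrapping each entity span.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<e1:PER>", "</e1:PER>", "<e2:LOC>", "</e2:LOC>"]}
)

model = MBertRelationClassifier(num_relations=10)  # placeholder label count
model.encoder.resize_token_embeddings(len(tokenizer))  # account for new markers

sentence = ("<e1:PER> Rabindranath Tagore </e1:PER> was born in "
            "<e2:LOC> Kolkata </e2:LOC> .")
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"])
predicted_relation = logits.argmax(dim=-1)  # index into the relation label set
```

In this setup, cross-lingual transfer amounts to fine-tuning the same classifier on gold instances from one language and silver (translated and aligned) instances from another, since mBERT shares one vocabulary and encoder across all the languages involved.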
- Anthology ID: 2021.conll-1.45
- Volume: Proceedings of the 25th Conference on Computational Natural Language Learning
- Month: November
- Year: 2021
- Address: Online
- Editors: Arianna Bisazza, Omri Abend
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 575–587
- URL: https://aclanthology.org/2021.conll-1.45
- DOI: 10.18653/v1/2021.conll-1.45
- Cite (ACL): Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, and Soumen Chakrabarti. 2021. A Data Bootstrapping Recipe for Low-Resource Multilingual Relation Classification. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 575–587, Online. Association for Computational Linguistics.
- Cite (Informal): A Data Bootstrapping Recipe for Low-Resource Multilingual Relation Classification (Nag et al., CoNLL 2021)
- PDF: https://aclanthology.org/2021.conll-1.45.pdf