Abstract
Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low data regimes. Despite the success of prompting in monolingual settings, applying prompt-based methods in multilingual scenarios has been limited to a narrow set of tasks, due to the high cost of handcrafting multilingual prompts. In this paper, we present the first work on prompt-based multilingual relation classification (RC), by introducing an efficient and effective method that constructs prompts from relation triples and involves only minimal translation for the class labels. We evaluate its performance in fully supervised, few-shot and zero-shot scenarios, and analyze its effectiveness across 14 languages, prompt variants, and English-task training in cross-lingual settings. We find that in both fully supervised and few-shot scenarios, our prompt method beats competitive baselines: fine-tuning XLM-R_EM and null prompts. It also outperforms the random baseline by a large margin in zero-shot experiments. Our method requires little in-language knowledge and can be used as a strong baseline for similar multilingual classification tasks.
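To make the idea concrete, below is a minimal sketch (not the authors' code or templates) of what prompt-based relation classification with a multilingual masked LM might look like. The cloze template, the `score_relations` helper, and the verbalizer map are illustrative assumptions; the paper builds its prompts from relation triples with minimally translated label words, which this sketch only approximates using single-token verbalizers.

```python
# Minimal sketch: rank candidate relation labels by filling a cloze prompt
# with a multilingual masked LM (XLM-R). Template and verbalizers are
# hypothetical, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

def score_relations(sentence, head, tail, verbalizers):
    """Rank relation labels for the entity pair (head, tail) in `sentence`.
    `verbalizers` maps relation names to label words, e.g.
    {"founded_by": "founded"} -- assumed single-token for simplicity."""
    # Generic cloze template: the mask stands in for the relation word.
    prompt = f"{sentence} {head} {tokenizer.mask_token} {tail}."
    inputs = tokenizer(prompt, return_tensors="pt")
    # Position of the mask token in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    scores = {}
    for rel, word in verbalizers.items():
        ids = tokenizer.encode(word, add_special_tokens=False)
        # Score only the first sub-token (a simplification for multi-token words).
        scores[rel] = logits[ids[0]].item()
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranked = score_relations(
    "Ada Lovelace worked with Charles Babbage.",
    "Ada Lovelace", "Charles Babbage",
    {"colleague_of": "colleague", "spouse_of": "married"},
)
print(ranked)
```

Because only the verbalizer words are language-specific, extending such a scheme to a new language would in principle require translating just the label words, which is the kind of minimal translation cost the abstract refers to.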
- Anthology ID: 2022.emnlp-main.69
- Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 1059–1075
- URL: https://aclanthology.org/2022.emnlp-main.69
- DOI: 10.18653/v1/2022.emnlp-main.69
- Cite (ACL): Yuxuan Chen, David Harbecke, and Leonhard Hennig. 2022. Multilingual Relation Classification via Efficient and Effective Prompting. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1059–1075, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Multilingual Relation Classification via Efficient and Effective Prompting (Chen et al., EMNLP 2022)
- PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.69.pdf