OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection

Lis Pereira, Ichiro Kobayashi


Abstract
We propose a multilingual adversarial training model for determining whether a sentence contains an idiomatic expression. Given that a key challenge in this task is the limited size of annotated data, our model relies on pre-trained contextual representations from different multilingual state-of-the-art transformer-based language models (i.e., multilingual BERT and XLM-RoBERTa) and on adversarial training, a training method that further enhances model generalization and robustness. Without relying on any human-crafted features, knowledge base, or additional datasets other than the target datasets, our model achieved competitive results, ranking 6th in the SubTask A (zero-shot) setting and 15th in the SubTask A (one-shot) setting.
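To illustrate the general idea described in the abstract, below is a minimal, hypothetical sketch of embedding-space adversarial training for binary idiomaticity classification with an XLM-RoBERTa backbone. The hyperparameters (EPSILON, ADV_WEIGHT, learning rate), the single-step perturbation, and the label scheme are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch: fine-tuning a multilingual transformer with an
# adversarial perturbation applied to the input embeddings.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"   # one of the backbones mentioned in the abstract
EPSILON = 1e-5                    # assumed perturbation norm bound
ADV_WEIGHT = 1.0                  # assumed weight on the adversarial loss

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)


def adversarial_step(batch_texts, labels):
    """One training step: clean loss plus loss on adversarially perturbed embeddings."""
    enc = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
    labels = torch.tensor(labels)

    # Forward pass through the word embeddings (kept explicit so we can perturb them).
    embeds = model.get_input_embeddings()(enc["input_ids"])
    out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"], labels=labels)
    clean_loss = out.loss

    # Gradient of the clean loss w.r.t. the embeddings defines the perturbation direction.
    grad = torch.autograd.grad(clean_loss, embeds, retain_graph=True)[0]
    delta = EPSILON * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # Second forward pass on the perturbed embeddings (adversarial loss).
    adv_out = model(inputs_embeds=embeds + delta.detach(),
                    attention_mask=enc["attention_mask"], labels=labels)
    loss = clean_loss + ADV_WEIGHT * adv_out.loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with made-up examples (label scheme assumed: 0 = idiomatic, 1 = literal).
adversarial_step(["He kicked the bucket last night.",
                  "She kicked the bucket down the stairs."], [0, 1])
```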
Anthology ID:
2022.semeval-1.27
Volume:
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
217–220
URL:
https://aclanthology.org/2022.semeval-1.27
DOI:
10.18653/v1/2022.semeval-1.27
Cite (ACL):
Lis Pereira and Ichiro Kobayashi. 2022. OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 217–220, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection (Pereira & Kobayashi, SemEval 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.semeval-1.27.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2022.semeval-1.27.mp4