Abstract
Discrimination between antonyms and synonyms is an important and challenging NLP task. Antonyms and synonyms often share the same or similar contexts and are therefore hard to distinguish. This paper proposes two underlying hypotheses and employs the mixture-of-experts framework as a solution. The framework follows a divide-and-conquer strategy: a number of localized experts focus on their own domains (or subspaces) to learn their specialties, while a gating mechanism determines the space partitioning and the expert mixture. Experimental results show that our method achieves state-of-the-art performance on the task.
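The abstract describes the mixture-of-experts architecture only at a high level. Below is a minimal PyTorch sketch of the divide-and-conquer idea, not the authors' released implementation (see the `zengnan1997/moe-asd` repository for that); the embedding dimension, number of experts, hidden size, and concatenated pair representation are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class MixtureOfExpertsASD(nn.Module):
    """Sketch of a mixture-of-experts classifier for antonym-synonym
    discrimination: localized experts specialize in subspaces of the
    word-pair representation, and a gating network softly partitions
    the space and mixes the experts' predictions."""

    def __init__(self, embed_dim=300, num_experts=4, hidden_dim=64):
        super().__init__()
        pair_dim = 2 * embed_dim  # concatenated word-pair embedding (assumption)
        # Localized experts: small MLPs, each emitting a logit for P(antonym | pair).
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(pair_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )
            for _ in range(num_experts)
        ])
        # Gating network: softmax weights determine each expert's contribution.
        self.gate = nn.Linear(pair_dim, num_experts)

    def forward(self, w1, w2):
        pair = torch.cat([w1, w2], dim=-1)                    # (batch, 2*embed_dim)
        gate_weights = torch.softmax(self.gate(pair), dim=-1)  # (batch, K)
        expert_probs = torch.sigmoid(
            torch.cat([e(pair) for e in self.experts], dim=-1)  # (batch, K)
        )
        # Mixture: gate-weighted average of the experts' probabilities.
        return (gate_weights * expert_probs).sum(dim=-1)        # (batch,)

# Usage with random vectors standing in for pretrained word embeddings;
# train against 0/1 antonym labels with nn.BCELoss.
model = MixtureOfExpertsASD()
w1, w2 = torch.randn(8, 300), torch.randn(8, 300)
p_antonym = model(w1, w2)
```

Because the gate's softmax weights depend on the input pair, gradient descent jointly learns the soft partition of the representation space and each expert's specialty within its region.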
- Anthology ID: 2021.acl-short.71
- Volume: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
- Month: August
- Year: 2021
- Address: Online
- Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
- Venues: ACL | IJCNLP
- Publisher: Association for Computational Linguistics
- Pages: 558–564
- URL: https://aclanthology.org/2021.acl-short.71
- DOI: 10.18653/v1/2021.acl-short.71
- Cite (ACL): Zhipeng Xie and Nan Zeng. 2021. A Mixture-of-Experts Model for Antonym-Synonym Discrimination. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 558–564, Online. Association for Computational Linguistics.
- Cite (Informal): A Mixture-of-Experts Model for Antonym-Synonym Discrimination (Xie & Zeng, ACL-IJCNLP 2021)
- PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2021.acl-short.71.pdf
- Code: zengnan1997/moe-asd