Abstract
Figurative language is ubiquitous in human communication, yet current NLP models struggle to demonstrate a meaningful understanding of this phenomenon. The EMNLP 2022 shared task on figurative language understanding posed the problem of predicting and explaining the relation between a premise and a hypothesis that uses figurative language. We experiment with several variations of T5-large for this task and build a model that significantly outperforms the task baseline. Treating it as a new task for T5 and simply fine-tuning on the provided data achieves the best score on the defined evaluation. Furthermore, we find that hypothesis-only models recover most of this performance.
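As a rough illustration of the approach the abstract describes, the sketch below fine-tunes T5-large as a sequence-to-sequence model that reads a premise and a figurative hypothesis and generates a label together with an explanation. The prompt template, the toy example, and the hyperparameters are illustrative assumptions, not the authors' actual configuration or data.

```python
# Minimal sketch (not the authors' code): fine-tune T5-large to jointly predict
# an entailment/contradiction label and generate a free-text explanation.
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from torch.optim import AdamW

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
optimizer = AdamW(model.parameters(), lr=1e-4)

# One hypothetical premise/hypothesis pair; the hypothesis is figurative.
source = ("figurative nli premise: The interview went very badly. "
          "hypothesis: The interview was a train wreck.")
target = "entailment explanation: A train wreck is a metaphor for a disaster."

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=256)
labels = tokenizer(target, return_tensors="pt", truncation=True,
                   max_length=128).input_ids

# A single gradient step; a real run would loop over the shared-task data.
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# At inference time, the model generates the label and explanation jointly.
model.eval()
generated = model.generate(**inputs, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

A hypothesis-only variant, in the spirit of the ablation mentioned in the abstract, could be simulated by simply omitting the premise span from the prompt.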
- Anthology ID: 2022.flp-1.20
- Volume: Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates (Hybrid)
- Venue: FLP
- Publisher: Association for Computational Linguistics
- Pages: 143–149
- URL: https://aclanthology.org/2022.flp-1.20
- Cite (ACL): Yash Kumar Lal and Mohaddeseh Bastan. 2022. SBU Figures It Out: Models Explain Figurative Language. In Proceedings of the 3rd Workshop on Figurative Language Processing (FLP), pages 143–149, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
- Cite (Informal): SBU Figures It Out: Models Explain Figurative Language (Lal & Bastan, FLP 2022)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2022.flp-1.20.pdf