Augmenting Neural Metaphor Detection with Concreteness

Ghadi Alnafesah, Harish Tayyar Madabushi, Mark Lee


Abstract
The idea that a shift in concreteness within a sentence signals the presence of a metaphor is long established. However, recent metaphor detection methods based on deep neural models have ignored concreteness and related psycholinguistic information. We hypothesize that this information is not available to these models and that adding it will improve their ability to detect metaphor. We test this hypothesis on the Metaphor Detection Shared Task 2020 and find that adding concreteness information does indeed improve the performance of deep neural models. We also run tests on data from a previous shared task and observe similar results.
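The general idea described in the abstract, supplying per-word concreteness ratings alongside a model's learned representations, could be sketched as follows. This is a minimal illustration, not the authors' implementation: the rating values, the default score, and the helper name are all assumptions made for the example.

```python
import numpy as np

# Hypothetical per-word concreteness ratings on a 1-5 scale
# (illustrative values only; real systems typically draw on published norms).
CONCRETENESS = {"the": 1.5, "debate": 2.0, "was": 1.3, "heated": 3.8}

def augment_with_concreteness(tokens, embeddings, default=2.5):
    """Append each token's concreteness rating as one extra feature
    dimension on its embedding vector."""
    ratings = np.array([[CONCRETENESS.get(t.lower(), default)]
                        for t in tokens])
    return np.concatenate([embeddings, ratings], axis=1)

tokens = ["the", "debate", "was", "heated"]
embeddings = np.random.rand(len(tokens), 8)  # stand-in for contextual embeddings
augmented = augment_with_concreteness(tokens, embeddings)
print(augmented.shape)  # (4, 9): original dimensions plus one concreteness feature
```

The augmented vectors would then be fed to whatever classifier performs the per-token metaphor labelling; the paper itself should be consulted for how the concreteness information is actually integrated.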
Anthology ID:
2020.figlang-1.28
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
204–210
URL:
https://aclanthology.org/2020.figlang-1.28
DOI:
10.18653/v1/2020.figlang-1.28
Cite (ACL):
Ghadi Alnafesah, Harish Tayyar Madabushi, and Mark Lee. 2020. Augmenting Neural Metaphor Detection with Concreteness. In Proceedings of the Second Workshop on Figurative Language Processing, pages 204–210, Online. Association for Computational Linguistics.
Cite (Informal):
Augmenting Neural Metaphor Detection with Concreteness (Alnafesah et al., Fig-Lang 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.figlang-1.28.pdf