Learning Outside the Box: Discourse-level Features Improve Metaphor Identification

Jesse Mu, Helen Yannakoudakis, Ekaterina Shutova


Abstract
Most current approaches to metaphor identification use restricted linguistic contexts, e.g. by considering only a verb’s arguments or the sentence containing a phrase. Inspired by pragmatic accounts of metaphor, we argue that broader discourse features are crucial for better metaphor identification. We train simple gradient boosting classifiers on representations of an utterance and its surrounding discourse learned with a variety of document embedding methods, obtaining near state-of-the-art results on the 2018 VU Amsterdam metaphor identification task without the complex metaphor-specific features or deep neural architectures employed by other systems. A qualitative analysis further confirms the need for broader context in metaphor processing.
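The approach the abstract describes can be sketched as follows: embed the target verb, its utterance, and the surrounding discourse; concatenate the vectors; and train a gradient boosting classifier. This is a minimal illustrative sketch only — the embeddings below are random stand-ins (the paper learns them with document embedding methods such as doc2vec), and the dataset sizes and feature layout are assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, dim = 200, 50  # toy dataset: 200 verb instances, 50-d embeddings

# Concatenate three feature vectors per instance:
# [verb lemma embedding | utterance embedding | discourse embedding].
# Random vectors here stand in for learned document embeddings.
X = np.hstack([rng.normal(size=(n, dim)) for _ in range(3)])
y = rng.integers(0, 2, size=n)  # 1 = metaphorical, 0 = literal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")
```

With real embeddings of the utterance and its wider discourse in place of the random features, this simple classifier is the kind of model the paper contrasts with deep, metaphor-specific architectures.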
Anthology ID: N19-1059
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 596–601
URL: https://aclanthology.org/N19-1059
DOI: 10.18653/v1/N19-1059
Cite (ACL): Jesse Mu, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Learning Outside the Box: Discourse-level Features Improve Metaphor Identification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 596–601, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Learning Outside the Box: Discourse-level Features Improve Metaphor Identification (Mu et al., NAACL 2019)
PDF: https://preview.aclanthology.org/autopr/N19-1059.pdf
Code: jayelm/broader-metaphor