Abstract
English part-of-speech taggers regularly make egregious errors related to noun-verb ambiguity, despite having achieved 97%+ accuracy on the WSJ Penn Treebank since 2002. These mistakes have been difficult to quantify and make taggers less useful to downstream tasks such as translation and text-to-speech synthesis. This paper creates a new dataset of over 30,000 naturally-occurring non-trivial examples of noun-verb ambiguity. Taggers within 1% of each other when measured on the WSJ have accuracies ranging from 57% to 75% on this challenge set. Enhancing the strongest existing tagger with contextual word embeddings and targeted training data improves its accuracy to 89%, a 14% absolute (52% relative) improvement. Downstream, using just this enhanced tagger yields a 28% reduction in error over the prior best learned model for homograph disambiguation for text-to-speech synthesis.
- Anthology ID:
- D18-1277
- Volume:
- Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month:
- October-November
- Year:
- 2018
- Address:
- Brussels, Belgium
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2562–2572
- URL:
- https://aclanthology.org/D18-1277
- DOI:
- 10.18653/v1/D18-1277
- Cite (ACL):
- Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A Challenge Set and Methods for Noun-Verb Ambiguity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2562–2572, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal):
- A Challenge Set and Methods for Noun-Verb Ambiguity (Elkahky et al., EMNLP 2018)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/D18-1277.pdf