Abstract
Researchers illustrate improvements in contextual encoding strategies via resultant performance on a battery of shared Natural Language Understanding (NLU) tasks. Many of these tasks are of a categorical prediction variety: given a conditioning context (e.g., an NLI premise), provide a label based on an associated prompt (e.g., an NLI hypothesis). The categorical nature of these tasks has led to common use of a cross entropy log-loss objective during training. We suggest this loss is intuitively wrong when applied to plausibility tasks, where the prompt by design is neither categorically entailed nor contradictory given the context. Log-loss naturally drives models to assign scores near 0.0 or 1.0, in contrast to our proposed use of a margin-based loss. Following a discussion of our intuition, we describe a confirmation study based on an extreme, synthetically curated task derived from MultiNLI. We find that a margin-based loss leads to a more plausible model of plausibility. Finally, we illustrate improvements on the Choice Of Plausible Alternative (COPA) task through this change in loss.

- Anthology ID: P19-1475
- Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2019
- Address: Florence, Italy
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 4818–4823
- URL: https://aclanthology.org/P19-1475
- DOI: 10.18653/v1/P19-1475
- Cite (ACL): Zhongyang Li, Tongfei Chen, and Benjamin Van Durme. 2019. Learning to Rank for Plausible Plausibility. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4818–4823, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): Learning to Rank for Plausible Plausibility (Li et al., ACL 2019)
- PDF: https://preview.aclanthology.org/remove-xml-comments/P19-1475.pdf
- Data: COPA, MultiNLI
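
The abstract's contrast between log-loss and a margin-based objective can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the scores, the margin value of 0.2, and the function names below are all assumptions chosen for the example.

```python
import math

def log_loss(score: float, label: int) -> float:
    # Binary cross-entropy on a single example. Loss is zero only at
    # score = 0.0 or 1.0, so training keeps pushing scores to the extremes.
    return -(label * math.log(score) + (1 - label) * math.log(1 - score))

def margin_loss(s_pos: float, s_neg: float, margin: float = 0.2) -> float:
    # Pairwise ranking hinge: only requires the more plausible
    # alternative to outscore the less plausible one by `margin`.
    return max(0.0, margin - (s_pos - s_neg))

# A moderately confident model: plausible alternative scored 0.7,
# implausible one scored 0.4.
print(log_loss(0.7, 1) + log_loss(0.4, 0))  # nonzero: log-loss still pushes toward 0/1
print(margin_loss(0.7, 0.4))                # zero: the ordering already satisfies the margin
```

Once the correct alternative leads by at least the margin, the ranking loss stops applying pressure, whereas log-loss rewards ever more extreme scores even for prompts that are merely plausible rather than entailed.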