Abstract
End-to-end models in NLP rarely encode external world knowledge about lengths of time. We introduce two effective models for duration prediction, which incorporate external knowledge by reading temporally related news sentences (time-aware pre-training). Specifically, one model predicts the range/unit in which the duration value falls (R-PRED), and the other predicts the exact duration value (E-PRED). Our best model, E-PRED, substantially outperforms previous work and captures duration information more accurately than R-PRED. We also demonstrate that our models are capable of duration prediction in the unsupervised setting, outperforming the baselines.
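The abstract only names the two prediction heads, so here is a minimal PyTorch sketch of what they could look like on top of a time-aware pre-trained encoder: R-PRED as a classifier over duration ranges/units and E-PRED as a regressor for the exact value. The class name, unit inventory, hidden size, and log-seconds normalization are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch (not the paper's released code) of the two duration
# heads described in the abstract. All names here are hypothetical.
import torch
import torch.nn as nn

# Assumed inventory of duration ranges/units for R-PRED.
UNITS = ["seconds", "minutes", "hours", "days", "weeks", "months", "years"]

class DurationHeads(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # R-PRED: classify which range/unit the duration falls into.
        self.r_pred = nn.Linear(hidden_size, len(UNITS))
        # E-PRED: regress the exact duration value; predicting in
        # log-seconds is an assumed normalization, since durations
        # span many orders of magnitude.
        self.e_pred = nn.Linear(hidden_size, 1)

    def forward(self, encoded: torch.Tensor):
        unit_logits = self.r_pred(encoded)              # (batch, |UNITS|)
        log_seconds = self.e_pred(encoded).squeeze(-1)  # (batch,)
        return unit_logits, log_seconds

# Usage with a stand-in for the sentence representation (e.g., the [CLS]
# vector) produced by a time-aware pre-trained encoder such as BERT:
heads = DurationHeads()
cls_vec = torch.randn(4, 768)            # dummy batch of 4 sentences
unit_logits, log_seconds = heads(cls_vec)
print(unit_logits.shape, log_seconds.shape)  # (4, 7) and (4,)
```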
- Anthology ID: 2020.findings-emnlp.302
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3370–3378
- URL: https://aclanthology.org/2020.findings-emnlp.302
- DOI: 10.18653/v1/2020.findings-emnlp.302
- Cite (ACL): Zonglin Yang, Xinya Du, Alexander Rush, and Claire Cardie. 2020. Improving Event Duration Prediction via Time-aware Pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3370–3378, Online. Association for Computational Linguistics.
- Cite (Informal): Improving Event Duration Prediction via Time-aware Pre-training (Yang et al., Findings 2020)
- PDF: https://aclanthology.org/2020.findings-emnlp.302.pdf
- Data: MC-TACO