Towards a Language Model for Temporal Commonsense Reasoning

Mayuko Kimura, Lis Kanashiro Pereira, Ichiro Kobayashi


Abstract
Temporal commonsense reasoning is a challenging task, as it requires temporal knowledge that is usually not explicit in text. In this work, we propose an ensemble model for temporal commonsense reasoning. Our model relies on pre-trained contextual representations from transformer-based language models (i.e., BERT) and on a variety of training methods for enhancing model generalization: 1) multi-step fine-tuning using carefully selected auxiliary tasks and datasets, and 2) a temporal masked language model task specifically designed to capture temporal commonsense knowledge. Our model greatly outperforms the standard fine-tuning approach and strong baselines on the MC-TACO dataset.
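The abstract does not spell out how the temporal masked language model task selects its targets; a minimal sketch, assuming temporal expressions are detected with a simple lexicon of time units and replaced by a BERT-style `[MASK]` token (the pattern, function name, and mask token here are illustrative assumptions, not the paper's actual procedure), could look like this:

```python
import re

# Hypothetical lexicon-based detector for temporal expressions,
# e.g. "2 hours", "every week". The paper's real selection method
# may differ; this only illustrates the masking idea.
TEMPORAL_PATTERN = re.compile(
    r"\b(?:\d+\s+)?(?:second|minute|hour|day|week|month|year|decade|century)s?\b",
    re.IGNORECASE,
)

def mask_temporal_spans(sentence, mask_token="[MASK]"):
    """Replace each detected temporal expression with the mask token
    and return the masked sentence plus the original spans as targets."""
    targets = TEMPORAL_PATTERN.findall(sentence)
    masked = TEMPORAL_PATTERN.sub(mask_token, sentence)
    return masked, targets

masked, targets = mask_temporal_spans(
    "The meeting lasted 2 hours and is held every week."
)
print(masked)   # The meeting lasted [MASK] and is held every [MASK].
print(targets)  # ['2 hours', 'week']
```

The masked sentences and their recovered spans would then serve as training pairs for the masked language model objective, encouraging the model to predict plausible durations and frequencies from context.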
Anthology ID:
2021.ranlp-srw.12
Volume:
Proceedings of the Student Research Workshop Associated with RANLP 2021
Month:
September
Year:
2021
Address:
Online
Venue:
RANLP
Publisher:
INCOMA Ltd.
Note:
Pages:
78–84
URL:
https://aclanthology.org/2021.ranlp-srw.12
Cite (ACL):
Mayuko Kimura, Lis Kanashiro Pereira, and Ichiro Kobayashi. 2021. Towards a Language Model for Temporal Commonsense Reasoning. In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 78–84, Online. INCOMA Ltd.
Cite (Informal):
Towards a Language Model for Temporal Commonsense Reasoning (Kimura et al., RANLP 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.ranlp-srw.12.pdf
Data
CosmosQA, MC-TACO, SWAG