Improving Numeracy by Input Reframing and Quantitative Pre-Finetuning Task
Chung-Chi Chen, Hiroya Takamura, Ichiro Kobayashi, Yusuke Miyao
Abstract
Numbers have characteristics that distinguish them from words. Teaching models to understand numbers in text remains an open research question. Instead of discussing the calculation skills required, this paper focuses on a more fundamental topic: understanding numerals. We point out that innumeracy—the inability to handle basic numeral concepts—exists in most pretrained language models (LMs), and we propose a method to address this issue by exploring the notation of numbers. Further, we discuss whether changing the notation of numbers and pre-finetuning on a comparing-number task can improve performance on three benchmark datasets containing quantity-related tasks. The results of this study indicate that input reframing and the proposed pre-finetuning task are useful for RoBERTa.
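As a concrete illustration of the two ideas named in the abstract, the sketch below reframes numerals into alternative notations and generates synthetic training pairs for a comparing-number pre-finetuning task. The specific reframing schemes (digit splitting, scientific notation) and the helper names (`reframe_digits`, `reframe_scientific`, `comparison_examples`) are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch only: the reframing notations and the comparing-number
# task construction below are assumptions, not the paper's exact recipe.
import random


def reframe_digits(text: str) -> str:
    """Split each numeral into space-separated digits, e.g. '1234' -> '1 2 3 4',
    so a subword tokenizer sees one token per digit (assumed reframing)."""
    out = []
    for tok in text.split():
        if tok.replace(".", "", 1).isdigit():
            out.append(" ".join(tok))
        else:
            out.append(tok)
    return " ".join(out)


def reframe_scientific(text: str) -> str:
    """Rewrite each numeral in scientific notation, e.g. '1234' -> '1.234e+03'
    (another assumed notation variant)."""
    out = []
    for tok in text.split():
        if tok.replace(".", "", 1).isdigit():
            out.append(f"{float(tok):.3e}")
        else:
            out.append(tok)
    return " ".join(out)


def comparison_examples(n: int, low: float = 0.0, high: float = 1e6):
    """Generate (sentence, label) pairs for a binary comparing-number
    pre-finetuning task: label is 1 iff the first number is larger."""
    for _ in range(n):
        a, b = random.uniform(low, high), random.uniform(low, high)
        yield f"{a:.2f} [SEP] {b:.2f}", int(a > b)


if __name__ == "__main__":
    print(reframe_digits("The company earned 1234 dollars"))
    # -> The company earned 1 2 3 4 dollars
    print(reframe_scientific("The company earned 1234 dollars"))
    # -> The company earned 1.234e+03 dollars
    for sentence, label in comparison_examples(2):
        print(sentence, label)
```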
- Anthology ID:
- 2023.findings-eacl.4
- Volume:
- Findings of the Association for Computational Linguistics: EACL 2023
- Month:
- May
- Year:
- 2023
- Address:
- Dubrovnik, Croatia
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 69–77
- URL:
- https://aclanthology.org/2023.findings-eacl.4
- Cite (ACL):
- Chung-Chi Chen, Hiroya Takamura, Ichiro Kobayashi, and Yusuke Miyao. 2023. Improving Numeracy by Input Reframing and Quantitative Pre-Finetuning Task. In Findings of the Association for Computational Linguistics: EACL 2023, pages 69–77, Dubrovnik, Croatia. Association for Computational Linguistics.
- Cite (Informal):
- Improving Numeracy by Input Reframing and Quantitative Pre-Finetuning Task (Chen et al., Findings 2023)
- PDF:
- https://aclanthology.org/2023.findings-eacl.4.pdf