Bit_numeval at SemEval-2024 Task 7: Enhance Numerical Sensitivity and Reasoning Completeness for Quantitative Understanding

Xinyue Liang, Jiawei Li, Yizhe Yang, Yang Gao


Abstract
In this paper, we describe the methods used for Quantitative Natural Language Inference (QNLI) and Quantitative Question Answering (QQA) in Task 1 of SemEval-2024 NumEval. The challenge focuses on enhancing the model’s quantitative understanding, thereby improving its performance on these tasks. We approach this from two perspectives: (1) by integrating real-world numerical comparison data during the supervised fine-tuning (SFT) phase, we enhance the model’s numerical sensitivity; (2) we develop an innovative reward-model scoring mechanism, leveraging reinforcement learning from human feedback (RLHF) techniques to improve the model’s reasoning completeness.
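The first perspective above relies on SFT instances that ask the model to compare realistic numeric values. The paper does not publish its data-construction code here, so the following is only a minimal illustrative sketch of how such instruction–answer pairs could be generated synthetically; the instance format, value ranges, and function names are assumptions, not the authors' actual pipeline.

```python
import random

def make_comparison_example(rng: random.Random) -> dict:
    """Build one synthetic numerical-comparison instance for SFT.

    Each instance pairs two values with varying decimal precision
    (as in real-world prices or measurements) and labels which one
    is larger. Format is a hypothetical instruction/output pair.
    """
    a = round(rng.uniform(0, 1000), rng.choice([0, 1, 2]))
    b = round(rng.uniform(0, 1000), rng.choice([0, 1, 2]))
    if a == b:
        b += 1.0  # avoid ties so the label is well defined
    answer = "first" if a > b else "second"
    return {
        "instruction": f"Which number is larger: {a} or {b}? "
                       "Answer 'first' or 'second'.",
        "output": answer,
    }

# Build a small SFT dataset of comparison instances.
rng = random.Random(0)
dataset = [make_comparison_example(rng) for _ in range(1000)]
```

In practice, such synthetic pairs would be mixed with the task's own training data during the SFT phase rather than used alone.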
Anthology ID:
2024.semeval-1.258
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1830–1841
URL:
https://aclanthology.org/2024.semeval-1.258
Cite (ACL):
Xinyue Liang, Jiawei Li, Yizhe Yang, and Yang Gao. 2024. Bit_numeval at SemEval-2024 Task 7: Enhance Numerical Sensitivity and Reasoning Completeness for Quantitative Understanding. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1830–1841, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Bit_numeval at SemEval-2024 Task 7: Enhance Numerical Sensitivity and Reasoning Completeness for Quantitative Understanding (Liang et al., SemEval 2024)
PDF:
https://preview.aclanthology.org/retraction/2024.semeval-1.258.pdf
Supplementary material:
2024.semeval-1.258.SupplementaryMaterial.txt