Challenges at SemEval 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation

Ali Zhunis, Hao-yun Chuang


Abstract
Although Large Language Models (LLMs) excel at generating headlines as measured by ROUGE, they still struggle to reason about numbers and to generate news headlines containing accurate numerals. For SemEval-2024 Task 7 subtask 3, our team uses a contrastive loss to improve the model's understanding of numbers across their different surface expressions and its ability to distinguish between different numbers and their respective expressions. This system description paper uses T5 and BART as baseline models and compares their results with and without the contrastive loss. The results show that BART with contrastive loss outperforms all the other models and achieves the highest numerical accuracy among them.
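The abstract does not spell out the loss formulation, but a standard contrastive (InfoNCE-style) objective of the kind it describes pulls the embedding of a number toward the embedding of its alternative expression (e.g. "7" and "seven") while pushing it away from other numbers. The toy embeddings and function names below are hypothetical illustrations, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: low when the anchor is closer to the
    positive than to every negative, high otherwise."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# Hypothetical toy embeddings: "7" and "seven" should align,
# while "eight" serves as a negative.
emb_7 = [0.9, 0.1, 0.0]
emb_seven = [0.85, 0.15, 0.05]
emb_eight = [0.1, 0.9, 0.2]

loss = contrastive_loss(emb_7, emb_seven, [emb_eight])
```

In a full system this loss would typically be added to the generation (cross-entropy) loss of T5 or BART during fine-tuning, with the numeral and its verbal expression encoded by the same model.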
Anthology ID:
2024.semeval-1.236
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1659–1662
URL:
https://aclanthology.org/2024.semeval-1.236
Cite (ACL):
Ali Zhunis and Hao-yun Chuang. 2024. Challenges at SemEval 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1659–1662, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Challenges at SemEval 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation (Zhunis & Chuang, SemEval 2024)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.semeval-1.236.pdf
Supplementary material:
 2024.semeval-1.236.SupplementaryMaterial.zip
Supplementary material:
 2024.semeval-1.236.SupplementaryMaterial.txt