YNU-HPCC at SemEval-2025 Task 6: Using BERT Model with R-drop for Promise Verification

Dehui Deng, You Zhang, Jin Wang, Dan Xu, Xuejie Zhang


Abstract
This paper presents our participation in SemEval-2025 Task 6: multinational, multilingual, multi-industry promise verification. The task aims to extract Promise Identification, Supporting Evidence, Clarity of the Promise-Evidence Pair, and Timing for Verification from commitments made by businesses and governments, and to use these data to verify whether companies and governments have fulfilled their commitments. We participated in the English track, which involved analyzing numbers in the text, reading comprehension of the text content, and multi-label classification. Our model introduces regularized dropout (R-Drop) on top of BERT-base to stabilize predictions on non-target classes, improve the robustness of the model, and ultimately improve the evaluation metrics. Our approach obtained competitive results on the subtasks.
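The R-Drop idea mentioned in the abstract can be summarized as: run the same input through the model twice (each forward pass samples a different dropout mask), then add a symmetric KL-divergence term between the two output distributions to the usual cross-entropy loss. The following is a minimal dependency-free sketch of that loss for a single example; the function and parameter names (`r_drop_loss`, `alpha`) are illustrative assumptions, not the authors' actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def r_drop_loss(logits1, logits2, target, alpha=1.0):
    """R-Drop-style loss for one example.

    logits1/logits2: outputs of two forward passes with different
    dropout masks; target: gold class index; alpha: weight of the
    consistency (symmetric KL) term.
    """
    p = softmax(logits1)
    q = softmax(logits2)
    # Average cross-entropy over the two passes.
    ce = -0.5 * (math.log(p[target]) + math.log(q[target]))
    # Symmetric KL pulls the two dropout sub-models together.
    kl = 0.5 * (kl_div(p, q) + kl_div(q, p))
    return ce + alpha * kl
```

When the two passes agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy; in training, the term penalizes disagreement between dropout sub-models, which is the regularization effect the abstract refers to.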
Anthology ID:
2025.semeval-1.248
Volume:
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
Venues:
SemEval | WS
Publisher:
Association for Computational Linguistics
Pages:
1905–1911
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.248/
Cite (ACL):
Dehui Deng, You Zhang, Jin Wang, Dan Xu, and Xuejie Zhang. 2025. YNU-HPCC at SemEval-2025 Task 6: Using BERT Model with R-drop for Promise Verification. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 1905–1911, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
YNU-HPCC at SemEval-2025 Task 6: Using BERT Model with R-drop for Promise Verification (Deng et al., SemEval 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.248.pdf