ignore at SemEval-2024 Task 5: A Legal Classification Model with Summary Generation and Contrastive Learning

Binjie Sun, Xiaobing Zhou


Abstract
This paper describes our work for SemEval-2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. After analyzing the task requirements and the training dataset, we applied data augmentation, used the large language model GPT for summary generation, and added supervised contrastive learning to a basic BERT model. Our system achieved an F1 score of 0.551, ranking 14th on the competition leaderboard and improving on the official baseline by 0.1241 F1.
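The abstract describes adding supervised contrastive learning to a BERT classifier. Below is a minimal PyTorch sketch of that idea, not the authors' released code: the model name, loss weight alpha, and temperature are illustrative assumptions, and the contrastive term follows the standard SupCon formulation, treating same-label examples within a mini-batch as positives.

import torch
import torch.nn.functional as F
from torch import nn
from transformers import AutoModel


def supervised_contrastive_loss(features, labels, temperature=0.1):
    # SupCon-style loss: pull same-label embeddings together, push the rest apart.
    features = F.normalize(features, dim=1)                      # cosine-similarity space
    sim = features @ features.T / temperature                    # pairwise similarity matrix
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))              # drop self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)                 # guard anchors with no positives
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count).mean()


class BertWithSupCon(nn.Module):
    # BERT encoder with a classification head; total loss = cross-entropy + alpha * SupCon.
    def __init__(self, model_name="bert-base-uncased", num_labels=2, alpha=0.5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.alpha = alpha                                       # contrastive weight (assumed value)

    def forward(self, input_ids, attention_mask, labels):
        cls = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = self.classifier(cls)
        loss = F.cross_entropy(logits, labels) + self.alpha * supervised_contrastive_loss(cls, labels)
        return loss, logits

The contrastive term is computed on the [CLS] embeddings of each mini-batch alongside the usual cross-entropy objective, which is one common way to combine the two losses.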
Anthology ID:
2024.semeval-1.80
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
530–535
URL:
https://aclanthology.org/2024.semeval-1.80
Cite (ACL):
Binjie Sun and Xiaobing Zhou. 2024. ignore at SemEval-2024 Task 5: A Legal Classification Model with Summary Generation and Contrastive Learning. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 530–535, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
ignore at SemEval-2024 Task 5: A Legal Classification Model with Summary Generation and Contrastive Learning (Sun & Zhou, SemEval 2024)
PDF:
https://preview.aclanthology.org/ingestion-checklist/2024.semeval-1.80.pdf
Supplementary material:
 2024.semeval-1.80.SupplementaryMaterial.txt
Supplementary material:
 2024.semeval-1.80.SupplementaryMaterial.zip