Abstract
At SemEval-2024 Task 5, the organizers introduced a novel natural language processing (NLP) challenge and dataset within the realm of United States civil procedure. Each datum in the dataset comprises a comprehensive overview of a legal case, a specific question associated with it, and a candidate argument in support of a solution, supplemented with an in-depth rationale elucidating the applicability of the argument within the given context. Derived from a text designed for legal education, this dataset presents a multifaceted benchmarking task for contemporary legal language models. Our manuscript delineates the approach we adopted for participation in this competition. Specifically, we detail the use of a Mistral 7B model to answer the questions provided. Our only and best submission reaches an F1-score of 0.5597 and an accuracy of 0.5714, outperforming the baseline provided for the task.
- Anthology ID:
- 2024.semeval-1.24
- Volume:
- Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
- Month:
- June
- Year:
- 2024
- Address:
- Mexico City, Mexico
- Editors:
- Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 155–162
- Language:
- URL:
- https://aclanthology.org/2024.semeval-1.24
- DOI:
- 10.18653/v1/2024.semeval-1.24
- Cite (ACL):
- Marco Siino. 2024. Mistral at SemEval-2024 Task 5: Mistral 7B for argument reasoning in Civil Procedure. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 155–162, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Mistral at SemEval-2024 Task 5: Mistral 7B for argument reasoning in Civil Procedure (Siino, SemEval 2024)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2024.semeval-1.24.pdf