NLP at UC Santa Cruz at SemEval-2024 Task 5: Legal Answer Validation using Few-Shot Multi-Choice QA

Anish Pahilajani, Samyak Jain, Devasha Trivedi


Abstract
This paper presents our submission to SemEval 2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. We present two approaches to the task of legal answer validation: given an introduction to the case, a question, and an answer candidate, decide whether the candidate is correct. First, we fine-tuned pre-trained BERT-based models and found that models trained on domain knowledge perform better. Second, we performed few-shot prompting on GPT models and found that reformulating the answer validation task as a multiple-choice QA task remarkably improves model performance. Our best submission is a BERT-based model that placed 7th out of 20.
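The reformulation described in the abstract — turning binary answer validation into multiple-choice QA for few-shot prompting — can be sketched as a prompt-construction step. This is a minimal illustrative sketch, not the authors' code; the function name, field labels, and example structure are assumptions.

```python
# Hedged sketch: reformulate legal answer validation as a multiple-choice
# QA prompt for a few-shot GPT setup. All names here are illustrative
# assumptions, not the paper's actual implementation.

def build_mcq_prompt(intro, question, candidates, few_shot_examples=()):
    """Format a case introduction, a question, and candidate answers as a
    lettered multiple-choice prompt, optionally prefixed by pre-formatted
    few-shot example strings."""
    letters = "ABCDEFGH"
    parts = list(few_shot_examples)  # each element is one worked example
    block = [f"Introduction: {intro}", f"Question: {question}"]
    for letter, candidate in zip(letters, candidates):
        block.append(f"{letter}. {candidate}")
    block.append("Answer:")  # the model completes with a letter
    parts.append("\n".join(block))
    return "\n\n".join(parts)
```

Under this framing, the model picks among competing answer candidates in one prompt rather than judging each candidate in isolation, which is one plausible reason the abstract reports improved performance.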
Anthology ID:
2024.semeval-1.189
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1309–1314
URL:
https://aclanthology.org/2024.semeval-1.189
Cite (ACL):
Anish Pahilajani, Samyak Jain, and Devasha Trivedi. 2024. NLP at UC Santa Cruz at SemEval-2024 Task 5: Legal Answer Validation using Few-Shot Multi-Choice QA. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1309–1314, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
NLP at UC Santa Cruz at SemEval-2024 Task 5: Legal Answer Validation using Few-Shot Multi-Choice QA (Pahilajani et al., SemEval 2024)
PDF:
https://preview.aclanthology.org/ingestion-checklist/2024.semeval-1.189.pdf
Supplementary material:
2024.semeval-1.189.SupplementaryMaterial.txt
Supplementary material:
2024.semeval-1.189.SupplementaryMaterial.zip