Abstract
The growth of pending legal cases in populous countries, such as India, has become a major issue. Developing effective techniques to process and understand legal documents is extremely useful in resolving this problem. In this paper, we present our systems for SemEval-2023 Task 6: understanding legal texts (Modi et al., 2023). Specifically, we first develop the Legal-BERT-HSLN model, which considers comprehensive context information at both the intra- and inter-sentence levels to predict rhetorical roles (subtask A), and then train a Legal-LUKE model, which is legal-contextualized and entity-aware, to recognize legal entities (subtask B). Our evaluations demonstrate that our models are more accurate than the baselines, e.g., with an up to 15.0% better F1 score in subtask B. We achieved notable performance on the task leaderboard, e.g., a 0.834 micro F1 score, ranking No. 5 out of 27 teams in subtask A.
- Anthology ID:
- 2023.semeval-1.72
- Volume:
- Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 517–525
- Language:
- URL:
- https://aclanthology.org/2023.semeval-1.72
- DOI:
- 10.18653/v1/2023.semeval-1.72
- Cite (ACL):
- Xin Jin and Yuchen Wang. 2023. TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 517–525, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models (Jin & Wang, SemEval 2023)
- PDF:
- https://preview.aclanthology.org/remove-xml-comments/2023.semeval-1.72.pdf