@inproceedings{jin-wang-2023-teamshakespeare,
    title = "{T}eam{S}hakespeare at {S}em{E}val-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models",
    author = "Jin, Xin  and
      Wang, Yuchen",
    editor = {Ojha, Atul Kr.  and
      Do{\u{g}}ru{\"o}z, A. Seza  and
      Da San Martino, Giovanni  and
      Tayyar Madabushi, Harish  and
      Kumar, Ritesh  and
      Sartori, Elisa},
    booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.semeval-1.72/",
    doi = "10.18653/v1/2023.semeval-1.72",
    pages = "517--525",
    abstract = "The growth of pending legal cases in populouscountries, such as India, has become a major is-sue. Developing effective techniques to processand understand legal documents is extremelyuseful in resolving this problem. In this pa-per, we present our systems for SemEval-2023Task 6: understanding legal texts (Modi et al., 2023). Specifically, we first develop the Legal-BERT-HSLN model that considers the com-prehensive context information in both intra-and inter-sentence levels to predict rhetoricalroles (subtask A) and then train a Legal-LUKEmodel, which is legal-contextualized and entity-aware, to recognize legal entities (subtask B).Our evaluations demonstrate that our designedmodels are more accurate than baselines, e.g.,with an up to 15.0{\%} better F1 score in subtaskB. We achieved notable performance in the taskleaderboard, e.g., 0.834 micro F1 score, andranked No.5 out of 27 teams in subtask A."
}