Mixed-domain Language Modeling for Processing Long Legal Documents

Wenyue Hua, Yuchen Zhang, Zhe Chen, Josie Li, Melanie Weber


Abstract
The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. Many legal services rely on processing and analyzing large collections of documents, so automating such tasks with NLP tools such as language models emerges as a key challenge, since legal documents may contain specialized vocabulary from other domains, e.g., medical terminology in personal injury text. However, most language models are general-purpose models, which either have limited reasoning capabilities on highly specialized legal terminology and syntax, such as BERT or RoBERTa, or are expensive to run and tune, such as GPT-3.5 and Claude. Thus, in this paper, we propose a specialized language model for personal injury text, LEGALRELECTRA, which is trained on mixed-domain legal and medical corpora. We show that, as a small language model, our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the ELECTRA framework but utilizes REFORMER instead of BERT for its generator and discriminator. We show that this improves the model's performance on processing long passages and results in better long-range text comprehension.
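For readers unfamiliar with the ELECTRA framework the abstract refers to, its replaced-token-detection objective can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the vocabulary, example sentence, and helper names are all invented for exposition, and the generator here samples tokens at random rather than using a trained Reformer.

```python
import random

random.seed(0)

# Toy mixed-domain (legal + medical) vocabulary; illustrative only.
VOCAB = ["plaintiff", "defendant", "injury", "court", "diagnosis"]

def mask_tokens(tokens, mask_prob=0.3):
    """Randomly mask tokens; return the masked sequence and mask positions."""
    masked, positions = [], []
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            masked.append("[MASK]")
            positions.append(i)
        else:
            masked.append(tok)
    return masked, positions

def generator_fill(masked, positions):
    """Stand-in generator: fill each [MASK] with a sampled vocabulary token.
    (In ELECTRA this is a small masked language model; here, random choice.)"""
    filled = list(masked)
    for i in positions:
        filled[i] = random.choice(VOCAB)
    return filled

def discriminator_labels(original, corrupted):
    """ELECTRA-style targets: 1 where the token was replaced, 0 otherwise.
    The discriminator is trained to predict these labels for every position,
    which is what makes the objective sample-efficient compared to MLM."""
    return [int(o != c) for o, c in zip(original, corrupted)]

tokens = ["plaintiff", "suffered", "injury", "per", "court", "diagnosis"]
masked, positions = mask_tokens(tokens)
corrupted = generator_fill(masked, positions)
labels = discriminator_labels(tokens, corrupted)
```

Note that when the generator happens to sample the original token, the label stays 0; ELECTRA treats such positions as "original", which this sketch reproduces.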
Anthology ID:
2023.nllp-1.7
Volume:
Proceedings of the Natural Legal Language Processing Workshop 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Daniel Preoțiuc-Pietro, Catalina Goanta, Ilias Chalkidis, Leslie Barrett, Gerasimos (Jerry) Spanakis, Nikolaos Aletras
Venues:
NLLP | WS
Publisher:
Association for Computational Linguistics
Pages:
51–61
URL:
https://aclanthology.org/2023.nllp-1.7
DOI:
10.18653/v1/2023.nllp-1.7
Cite (ACL):
Wenyue Hua, Yuchen Zhang, Zhe Chen, Josie Li, and Melanie Weber. 2023. Mixed-domain Language Modeling for Processing Long Legal Documents. In Proceedings of the Natural Legal Language Processing Workshop 2023, pages 51–61, Singapore. Association for Computational Linguistics.
Cite (Informal):
Mixed-domain Language Modeling for Processing Long Legal Documents (Hua et al., NLLP-WS 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.nllp-1.7.pdf