JuriBERT: A Masked-Language Model Adaptation for French Legal Text

Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani, David Restrepo Amariles


Abstract
Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on adapting domain-specific BERT models to French. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures for domain-specific sub-languages and their benefits for French legal text. We show that domain-specific pre-trained models can outperform their generalised equivalents in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.
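
The abstract describes pre-training smaller BERT-style architectures on French legal text with a masked-language-modeling objective. As a rough illustration of what such a setup can look like, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The corpus file, the reuse of the CamemBERT tokenizer, and the model dimensions are assumptions made for illustration only, not the authors' actual configuration (the paper should be consulted for the released JuriBERT sizes and training details).

```python
# Minimal sketch: masked-language-model pre-training of a small BERT-style
# model on a French legal corpus. All file paths and hyperparameters below
# are illustrative assumptions, not the paper's exact setup.
from datasets import load_dataset
from transformers import (
    CamembertConfig,
    CamembertForMaskedLM,
    CamembertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Plain-text corpus, one passage per line; "legal_corpus.txt" is a placeholder.
dataset = load_dataset("text", data_files={"train": "legal_corpus.txt"})

# Reusing the CamemBERT tokenizer is a simplification for this sketch;
# a tokenizer trained on the legal corpus itself is another option.
tokenizer = CamembertTokenizerFast.from_pretrained("camembert-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# A deliberately small architecture, reflecting the paper's point that compact
# models can serve a domain sub-language; these sizes are assumed.
config = CamembertConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
)
model = CamembertForMaskedLM(config)

# Standard MLM objective: randomly mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="juribert-sketch", num_train_epochs=1),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
```

The same pre-trained checkpoint can then be fine-tuned on downstream legal tasks (e.g. text classification) by swapping the masked-LM head for a task-specific one.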
Anthology ID: 2021.nllp-1.9
Volume: Proceedings of the Natural Legal Language Processing Workshop 2021
Month: November
Year: 2021
Address: Punta Cana, Dominican Republic
Editors: Nikolaos Aletras, Ion Androutsopoulos, Leslie Barrett, Catalina Goanta, Daniel Preotiuc-Pietro
Venue: NLLP
Publisher: Association for Computational Linguistics
Pages: 95–101
URL: https://aclanthology.org/2021.nllp-1.9
DOI: 10.18653/v1/2021.nllp-1.9
Cite (ACL): Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani, and David Restrepo Amariles. 2021. JuriBERT: A Masked-Language Model Adaptation for French Legal Text. In Proceedings of the Natural Legal Language Processing Workshop 2021, pages 95–101, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): JuriBERT: A Masked-Language Model Adaptation for French Legal Text (Douka et al., NLLP 2021)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2021.nllp-1.9.pdf