Contemporary LLMs struggle with extracting formal legal arguments

Lena Held, Ivan Habernal


Abstract
Legal Argument Mining (LAM) is a complex challenge for humans and language models alike. This paper explores the application of Large Language Models (LLMs) in LAM, focusing on the identification of fine-grained argument types within judgment texts. We compare the performance of Flan-T5 and Llama 3 models against a baseline RoBERTa model to study whether the advantages of orders-of-magnitude larger LLMs can be leveraged for this task. Our study investigates the effectiveness of fine-tuning and prompting strategies in enhancing the models’ ability to discern nuanced argument types. Despite employing state-of-the-art techniques, our findings indicate that neither fine-tuning nor prompting could surpass the performance of a domain-pre-trained encoder-only model. This highlights the challenges and limitations of adapting general-purpose large language models to the specialized domain of legal argumentation. The insights gained from this research contribute to the ongoing discourse on optimizing NLP models for complex, domain-specific tasks. Our code and data for reproducibility are available at https://github.com/trusthlt/legal-argument-spans.
Anthology ID: 2025.nllp-1.20
Volume: Proceedings of the Natural Legal Language Processing Workshop 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Nikolaos Aletras, Ilias Chalkidis, Leslie Barrett, Cătălina Goanță, Daniel Preoțiuc-Pietro, Gerasimos Spanakis
Venues: NLLP | WS
Publisher: Association for Computational Linguistics
Pages: 292–303
URL: https://preview.aclanthology.org/ingest-emnlp/2025.nllp-1.20/
Cite (ACL): Lena Held and Ivan Habernal. 2025. Contemporary LLMs struggle with extracting formal legal arguments. In Proceedings of the Natural Legal Language Processing Workshop 2025, pages 292–303, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Contemporary LLMs struggle with extracting formal legal arguments (Held & Habernal, NLLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.nllp-1.20.pdf