Pedro Vitor Quinta de Castro
2025
Labor Lex: A New Portuguese Corpus and Pipeline for Information Extraction in Brazilian Legal Texts
Pedro Vitor Quinta de Castro | Nádia Félix Felipe Da Silva
Proceedings of the Natural Legal Language Processing Workshop 2025
Relation Extraction (RE) is a challenging Natural Language Processing task that involves identifying named entities in text and classifying the relationships between them. When applied to a specific domain, the task acquires a new layer of complexity: handling the lexicon and context particular to that domain. In this work, the task is applied to the legal domain, specifically Brazilian Labor Law. Architectures based on Deep Learning, with word representations derived from Transformer Language Models (LMs), have shown state-of-the-art performance on the RE task. Recent work handles Named Entity Recognition (NER) and RE either as a single joint model or as a pipelined approach. In this work, we introduce Labor Lex, a newly constructed corpus based on public documents from Brazilian Labor Courts, and we present a pipeline of models trained on it. Different experiments are conducted for each task, comparing supervised training of LMs with In-Context Learning (ICL) using Large Language Models (LLMs), and analyzing the results of each. For the NER task, the best result was an F1-score of 89.97%, and for the RE task, 82.38%. The best results for both tasks were obtained with the supervised training approach.
2023
EconBERTa: Towards Robust Extraction of Named Entities in Economics
Karim Lasri | Pedro Vitor Quinta de Castro | Mona Schirmer | Luis Eduardo San Martin | Linxi Wang | Tomáš Dulka | Haaya Naushan | John Pougué-Biyong | Arianna Legovini | Samuel Fraiberger
Findings of the Association for Computational Linguistics: EMNLP 2023
Adapting general-purpose language models has proven to be effective in tackling downstream tasks within specific domains. In this paper, we address the task of extracting entities from the economics literature on impact evaluation. To this end, we release EconBERTa, a large language model pretrained on scientific publications in economics, and ECON-IE, a new expert-annotated dataset of economics abstracts for Named Entity Recognition (NER). We find that EconBERTa reaches state-of-the-art performance on our downstream NER task. Additionally, we extensively analyze the model’s generalization capacities, finding that most errors correspond to detecting only a subspan of an entity or failure to extrapolate to longer sequences. This limitation is primarily due to an inability to detect part-of-speech sequences unseen during training, and this effect diminishes when the number of unique instances in the training set increases. Examining the generalization abilities of domain-specific language models paves the way towards improving the robustness of NER models for causal knowledge extraction.
Co-authors
- Nádia Félix Felipe Da Silva 1
- Tomáš Dulka 1
- Samuel Fraiberger 1
- Karim Lasri 1
- Arianna Legovini 1