John Pougué-Biyong
Also published as: John Pougué Biyong
2023
EconBERTa: Towards Robust Extraction of Named Entities in Economics
Karim Lasri | Pedro Vitor Quinta de Castro | Mona Schirmer | Luis Eduardo San Martin | Linxi Wang | Tomáš Dulka | Haaya Naushan | John Pougué-Biyong | Arianna Legovini | Samuel Fraiberger
Findings of the Association for Computational Linguistics: EMNLP 2023
Adapting general-purpose language models has proven to be effective in tackling downstream tasks within specific domains. In this paper, we address the task of extracting entities from the economics literature on impact evaluation. To this end, we release EconBERTa, a large language model pretrained on scientific publications in economics, and ECON-IE, a new expert-annotated dataset of economics abstracts for Named Entity Recognition (NER). We find that EconBERTa reaches state-of-the-art performance on our downstream NER task. Additionally, we extensively analyze the model’s generalization capacities, finding that most errors correspond to detecting only a subspan of an entity or failure to extrapolate to longer sequences. This limitation is primarily due to an inability to detect part-of-speech sequences unseen during training, and this effect diminishes when the number of unique instances in the training set increases. Examining the generalization abilities of domain-specific language models paves the way towards improving the robustness of NER models for causal knowledge extraction.
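As a rough illustration of the downstream NER task described in this abstract, the sketch below applies a fine-tuned token-classification checkpoint to an economics sentence with the Hugging Face transformers library. The checkpoint name and example sentence are placeholders and are not taken from the paper.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder checkpoint name -- substitute the released EconBERTa NER weights.
MODEL_NAME = "econberta-ner-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)

# Group word pieces back into entity spans; the subspan errors analysed in the
# paper would surface here as truncated entity predictions.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("The cash transfer program increased school enrollment among girls."))
```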
2020
Information Extraction from Swedish Medical Prescriptions with Sig-Transformer Encoder
John Pougué Biyong | Bo Wang | Terry Lyons | Alejo Nevado-Holgado
Proceedings of the 3rd Clinical Natural Language Processing Workshop
Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding, and adding a simple prediction layer, has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture that incorporates the signature transform with the self-attention model. This component is added between the embedding and prediction layers. Experiments on a new Swedish prescription dataset show the proposed architecture to be superior to baseline models on two of the three information extraction tasks. Finally, we compare two embedding approaches: applying Multilingual BERT, and translating the Swedish text to English and then encoding it with a BERT model pretrained on clinical notes.
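A minimal PyTorch sketch of the idea of inserting a signature transform between the embedding and prediction layers is given below. It uses a hand-rolled depth-2 truncated signature rather than the authors' exact architecture or a dedicated signature library, so all module and function names are illustrative only.

```python
import torch
import torch.nn as nn


def truncated_signature(path: torch.Tensor) -> torch.Tensor:
    """Depth-2 truncated signature of a batch of sequences.

    path: (batch, length, dim). Returns (batch, dim + dim*dim):
    level-1 increments plus level-2 iterated integrals, computed
    exactly for piecewise-linear paths via Chen's identity.
    """
    increments = path[:, 1:, :] - path[:, :-1, :]                  # (B, L-1, D)
    level1 = increments.sum(dim=1)                                 # (B, D)
    cum = torch.cumsum(increments, dim=1)
    prev = torch.cat([torch.zeros_like(cum[:, :1]), cum[:, :-1]], dim=1)
    level2 = (torch.einsum("bti,btj->bij", prev, increments)
              + 0.5 * torch.einsum("bti,btj->bij", increments, increments))
    return torch.cat([level1, level2.flatten(1)], dim=1)


class SigAttentionEncoder(nn.Module):
    """Toy encoder: self-attention over token embeddings, a signature
    transform of the attended sequence, then a linear prediction head."""

    def __init__(self, emb_dim: int, n_heads: int, n_classes: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        self.head = nn.Linear(emb_dim + emb_dim * emb_dim, n_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(embeddings, embeddings, embeddings)
        return self.head(truncated_signature(attended))


# Example: 4 prescriptions, 16 tokens each, 32-dim (e.g. BERT) embeddings.
x = torch.randn(4, 16, 32)
logits = SigAttentionEncoder(emb_dim=32, n_heads=4, n_classes=5)(x)  # (4, 5)
```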