2020
Contextualized French Language Models for Biomedical Named Entity Recognition
Jenny Copara | Julien Knafou | Nona Naderi | Claudia Moro | Patrick Ruch | Douglas Teodoro
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Atelier DÉfi Fouille de Textes
Named entity recognition (NER) is key for biomedical applications as it enables knowledge discovery in free-text data. Since entities are semantic phrases, their meaning is conditioned on the context to avoid ambiguity. In this work, we explore contextualized language models for NER in French biomedical text as part of the Défi Fouille de Textes challenge. Our best approach achieved an F1-measure of 66% for the symptoms and signs, and pathology categories, ranking first in subtask 1. For the anatomy, dose, exam, mode, moment, substance, treatment, and value categories, it achieved an F1-measure of 75% (subtask 2). Considering all categories together, our model achieved the best result in the challenge, with an F1-measure of 72%. The use of an ensemble of neural language models proved very effective, improving a CRF baseline by up to 28% and a single specialised language model by 4%.
BiTeM at WNUT 2020 Shared Task-1: Named Entity Recognition over Wet Lab Protocols using an Ensemble of Contextual Language Models
Julien Knafou | Nona Naderi | Jenny Copara | Douglas Teodoro | Patrick Ruch
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
Recent improvements in machine-reading technologies have attracted much attention to automation problems and their possibilities. In this context, WNUT 2020 introduced a Named Entity Recognition (NER) task based on wet laboratory procedures. In this paper, we present a 3-step method based on deep neural language models that reported the best overall exact-match F1-score (77.99%) of the competition. By fine-tuning 10 different pretrained language models 10 times each, this work shows the advantage of having more models in an ensemble based on a majority-vote strategy. On top of that, having 100 different models allowed us to analyse ensemble combinations, demonstrating the impact of using multiple pretrained models versus fine-tuning a single pretrained model multiple times.
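A minimal sketch of the majority-vote combination described above (not the authors' implementation), assuming each fine-tuned model produces a label sequence aligned to the same tokens; the labels and model outputs here are made up:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-token label sequences from several fine-tuned models.
    For each token, the label chosen by the most models wins; ties fall
    back to the first model's label to keep the vote deterministic."""
    ensemble = []
    for token_labels in zip(*predictions):
        top_label, top_count = Counter(token_labels).most_common(1)[0]
        if token_labels.count(token_labels[0]) == top_count:
            top_label = token_labels[0]
        ensemble.append(top_label)
    return ensemble

# Hypothetical example: three models labelling the same four tokens.
preds = [
    ["B-Reagent", "I-Reagent", "O", "B-Action"],
    ["B-Reagent", "O",         "O", "B-Action"],
    ["B-Reagent", "I-Reagent", "O", "O"],
]
print(majority_vote(preds))  # ['B-Reagent', 'I-Reagent', 'O', 'B-Action']
```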
2018
Using context to identify the language of face-saving
Nona Naderi | Graeme Hirst
Proceedings of the 5th Workshop on Argument Mining
We created a corpus of face-saving utterances from parliamentary debates and used it to automatically analyze the language of reputation defence. Our proposed model, which incorporates information about threats to reputation, can predict reputation defence language with high confidence. Further experiments and evaluations on different datasets show that the model generalizes to new utterances and can predict the language of reputation defence in a new dataset.
Automated Fact-Checking of Claims in Argumentative Parliamentary Debates
Nona Naderi | Graeme Hirst
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
We present an automated approach to distinguish true, false, stretch, and dodge statements in questions and answers in the Canadian Parliament. We leverage the truthfulness annotations of a U.S. fact-checking corpus by training a neural net model and incorporating the prediction probabilities into our models. We find that in concert with other linguistic features, these probabilities can improve the multi-class classification results. We further show that dodge statements can be detected with an F1 measure as high as 82.57% in binary classification settings.
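As an illustration only (not the paper's implementation), the sketch below combines hypothetical prediction probabilities transferred from a model trained on a fact-checking corpus with other linguistic features and fits a simple multi-class classifier; all feature names and values are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical class probabilities transferred from a model trained on a
# U.S. fact-checking corpus (one row per statement, one column per class).
fact_check_probs = np.array([[0.7, 0.1, 0.1, 0.1],
                             [0.2, 0.5, 0.2, 0.1],
                             [0.1, 0.1, 0.6, 0.2]])
# Hypothetical additional linguistic features (e.g. length, sentiment score).
linguistic_feats = np.array([[12, 0.3],
                             [ 7, 0.8],
                             [20, 0.1]])
labels = ["true", "dodge", "stretch"]

# Concatenate the transferred probabilities with the linguistic features and
# train a multi-class classifier on the combined representation.
X = np.hstack([fact_check_probs, linguistic_feats])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```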
2017
Argumentation Quality Assessment: Theory vs. Practice
Henning Wachsmuth | Nona Naderi | Ivan Habernal | Yufang Hou | Graeme Hirst | Iryna Gurevych | Benno Stein
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Argumentation quality is viewed differently in argumentation theory and in practical assessment approaches. This paper studies to what extent the two views match empirically. We find that most spontaneously phrased observations on quality are in fact adequately represented by theory. Moreover, relative comparisons of arguments in practice correlate with absolute quality ratings based on theory. Our results clarify how the two views can learn from each other.
Recognizing Reputation Defence Strategies in Critical Political Exchanges
Nona Naderi | Graeme Hirst
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
We propose a new task of automatically detecting reputation defence strategies in the field of computational argumentation. We cast the problem as relation classification: given a pair consisting of a reputation threat and a reputation defence, we determine the reputation defence strategy. We annotate a dataset of parliamentary questions and answers with reputation defence strategies. We then propose a model based on supervised learning to detect these strategies, and report promising experimental results.
Classifying Frames at the Sentence Level in News Articles
Nona Naderi | Graeme Hirst
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
Previous approaches to generic frame classification analyze frames at the document level. Here, we propose a supervised approach based on deep neural networks and distributional representations for classifying frames at the sentence level in news articles. We conduct our experiments on the publicly available Media Frames Corpus compiled from U.S. newspapers. Using (B)LSTM and GRU networks to represent the meaning of frames, we demonstrate that our approach yields an improvement of at least 14 points over several baseline methods.
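A minimal sketch of a sentence-level BiLSTM frame classifier of the kind the abstract describes, written in PyTorch with illustrative dimensions (not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class BiLSTMFrameClassifier(nn.Module):
    """Embed tokens, run a bidirectional LSTM, and classify the sentence
    from the concatenated final hidden states. Sizes are illustrative."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_frames=15):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_frames)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq, embed_dim)
        _, (hidden, _) = self.bilstm(embedded)      # hidden: (2, batch, hidden_dim)
        sentence_repr = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(sentence_repr)       # (batch, num_frames)

# Toy batch: two "sentences" of 12 random token ids each.
model = BiLSTMFrameClassifier(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 15])
```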
Computational Argumentation Quality Assessment in Natural Language
Henning Wachsmuth | Nona Naderi | Yufang Hou | Yonatan Bilu | Vinodkumar Prabhakaran | Tim Alberdingk Thijm | Graeme Hirst | Benno Stein
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation. While different quality dimensions have been approached in natural language processing, a common understanding of argumentation quality is still missing. This paper presents the first holistic work on computational argumentation quality in natural language. We comprehensively survey the diverse existing theories and approaches to assess logical, rhetorical, and dialectical quality dimensions, and we derive a systematic taxonomy from these. In addition, we provide a corpus with 320 arguments, annotated for all 15 dimensions in the taxonomy. Our results establish a common ground for research on computational argumentation quality assessment.
2010
Ontology-Based Extraction and Summarization of Protein Mutation Impact Information
Nona Naderi | René Witte
Proceedings of the 2010 Workshop on Biomedical Natural Language Processing