Anaëlle Baledent


Validity, Agreement, Consensuality and Annotated Data Quality
Anaëlle Baledent | Yann Mathet | Antoine Widlöcher | Christophe Couronne | Jean-Luc Manguin
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Reference annotated (or gold-standard) datasets are required for common tasks such as training machine learning systems or validating systems. They are necessary to analyse or compare items annotated by experts, or to compare the output of any computational process to expert annotations. Yet, even though gold-standard annotated corpora are required, producing them is known to be a difficult problem, from both a theoretical and a practical point of view. Many studies devoted to these issues conclude that multi-annotation is most of the time a necessity. Inter-annotator agreement measurement, which is required to check the reliability of data and the reproducibility of an annotation task, and thus to establish a gold standard, is another thorny problem. A fine-grained analysis of the metrics available for this task therefore becomes essential. Our work is part of this effort and focuses more precisely on several problems which are rarely discussed, although they are intrinsically linked with the interpretation of metrics. In particular, we focus here on the complex relations between agreement and reference (of which agreement among annotators is supposed to be an indicator), and on the emergence of consensus. We also introduce the notion of consensuality as another relevant indicator.
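As an illustration of the kind of inter-annotator agreement metric the abstract refers to, the sketch below computes Cohen's kappa for two annotators, i.e. observed agreement corrected for the agreement expected by chance. This is a generic textbook metric, not the specific measures analysed in the paper; the labels and data are made up.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators
    who each assigned one categorical label per item."""
    assert len(a) == len(b)
    n = len(a)
    # observed proportion of items where the two annotators agree
    observed = sum(x == y for x, y in zip(a, b)) / n
    # agreement expected by chance, from each annotator's label distribution
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# two hypothetical annotators labelling six items as erroneous or not
a1 = ["err", "ok", "ok", "err", "ok", "err"]
a2 = ["err", "ok", "err", "err", "ok", "ok"]
print(round(cohens_kappa(a1, a2), 3))  # → 0.333
```

A kappa of 0.333 here reflects the paper's point: raw agreement (4/6) can look reasonable while chance-corrected agreement remains modest, which is why the interpretation of such metrics deserves scrutiny.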

Impact des modalités induites par les outils d’annotation manuelle : exemple de la détection des erreurs de français (Impact of modalities induced by manual annotation tools : example of French error detection)
Anaëlle Baledent
Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 2 : 24e Rencontres Etudiants Chercheurs en Informatique pour le TAL (RECITAL)

Certain choices made when designing an annotation campaign can affect the annotations produced. Through a campaign on French error detection with controlled parameters, we evaluate in particular the effect of the backtracking (undo) feature. Using pairs of nearly identical utterances, we highlight a tendency of annotators to take one utterance into account when annotating the other.


Dating Ancient texts: an Approach for Noisy French Documents
Anaëlle Baledent | Nicolas Hiebel | Gaël Lejeune
Proceedings of LT4HALA 2020 - 1st Workshop on Language Technologies for Historical and Ancient Languages

Automatic dating of ancient documents is an important area of research for digital humanities applications. Many documents available via digital libraries have no date, or a date that is uncertain. Document dating is not only useful in itself but also helps to choose the appropriate NLP tools (lemmatizer, POS tagger) for subsequent analysis. This paper provides a dataset of thousands of ancient documents in French and presents methods and evaluation metrics for this task. We compare character-level methods with token-level methods on two datasets from two different time periods and two different text genres. Our results show that character-level models are more robust to noise than classical token-level models. The experiments presented in this article focus on documents written in French, but we believe that the ability of character-level models to handle noise properly would help to achieve comparable results on other languages, and more ancient languages in particular.
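To give a flavour of why character-level representations tolerate noisy spellings better than token-level ones, here is a minimal sketch of dating by character n-gram profile similarity. The period profiles, the two toy reference sentences, and the nearest-profile decision rule are all invented for illustration; they are not the models or data from the paper.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency Counters."""
    num = sum(p[g] * q[g] for g in set(p) & set(q))
    den = (sum(v * v for v in p.values()) ** 0.5) * \
          (sum(v * v for v in q.values()) ** 0.5)
    return num / den if den else 0.0

# hypothetical per-period profiles, built from tiny made-up reference texts
profiles = {
    "17th": char_ngrams("les anciens escrivains usoient d'une orthographe fort differente"),
    "19th": char_ngrams("les écrivains modernes emploient une orthographe plus régulière"),
}

def date_text(text):
    """Assign the period whose character n-gram profile is most similar
    (a nearest-profile sketch, not the paper's method)."""
    p = char_ngrams(text)
    return max(profiles, key=lambda k: cosine(p, profiles[k]))

print(date_text("une orthographe fort estrange chez ces escrivains"))
```

Even when a word is misspelled or OCR-damaged, most of its character n-grams survive intact, so the profile match degrades gracefully; a token-level model would treat the damaged word as entirely unknown.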