2022
Enhancing the PARSEME Turkish Corpus of Verbal Multiword Expressions
Yagmur Ozturk | Najet Hadj Mohamed | Adam Lion-Bouton | Agata Savary
Proceedings of the 18th Workshop on Multiword Expressions @LREC2022
The PARSEME (Parsing and Multiword Expressions) project provides multilingual corpora annotated for multiword expressions (MWEs). In this case study, we focus on the Turkish corpus of PARSEME. Turkish is an agglutinative language whose word forms exhibit rich inflection and derivation, which complicates automatic morphosyntactic annotation. We provide an overview of the problems observed in the morphosyntactic annotation of the Turkish PARSEME corpus. These issues mostly concern lemmas, which are important for approximating the type of an MWE. We propose modifications of the original corpus that enhance its lemmas and parts of speech. The enhancements are then evaluated with Seen2Seen, an MWE identification system from the PARSEME Shared Task 1.2. Results show an increase in F-measure for MWE identification, emphasizing the necessity of robust morphosyntactic annotation for MWE processing, especially for languages with high surface variability.
Evaluating Diversity of Multiword Expressions in Annotated Text
Adam Lion-Bouton | Yagmur Ozturk | Agata Savary | Jean-Yves Antoine
Proceedings of the 29th International Conference on Computational Linguistics
Diversity can be decomposed into three distinct concepts: variety, balance, and disparity. This paper borrows from the extensive formalization and measures of diversity developed in ecology in order to evaluate the variety and balance of multiword expression annotations produced by automatic annotation systems. The measures considered in this paper are richness, normalized richness, and two variants of Hill’s evenness. We observe how these measures behave on increasingly small samples of gold annotations of multiword expressions and use this behavior to validate or invalidate their pertinence for multiword expressions in annotated texts. We apply the validated measures to annotations in 14 languages produced by systems during the PARSEME shared task on automatic identification of multiword expressions, as well as to the gold versions of the corpora. We also explore the limits of such evaluation by studying the impact of lemmatization errors in the Turkish corpus used in the shared task.
2020
Comment arpenter sans mètre : les scores de résolution de chaînes de coréférences sont-ils des métriques ? (Do the standard evaluation scores for coreference resolution constitute metrics?)
Adam Lion-Bouton | Loïc Grobol | Jean-Yves Antoine | Sylvie Billot | Anaïs Lefeuvre-Halftermeyer
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). 2e atelier Éthique et TRaitemeNt Automatique des Langues (ETeRNAL)
This article investigates whether the scores most widely used to evaluate coreference resolution constitute normalized similarity metrics. Following a purely experimental approach, we checked whether the MUC, B³, CEAF, BLANC, and LEA scores, as well as the CoNLL meta-score, satisfy the properties that define such a metric. Our study shows that only the CEAFm score is potentially a normalized similarity metric.