Léo Jacqmin


2023

„Mann“ is to “Donna” as「国王」is to « Reine » Adapting the Analogy Task for Multilingual and Contextual Embeddings
Timothee Mickus | Eduardo Calò | Léo Jacqmin | Denis Paperno | Mathieu Constant
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)

How does the word analogy task fit in the modern NLP landscape? Given the rarity of comparable multilingual benchmarks and the lack of a consensual evaluation protocol for contextual models, this remains an open question. In this paper, we introduce MATS: a multilingual analogy dataset covering forty analogical relations in six languages, and evaluate the performance of humans as well as of static and contextual embeddings on the task. We find that not all analogical relations are equally straightforward for humans, static models remain competitive with contextual embeddings, and optimal settings vary across languages and analogical relations. Several key challenges remain, including creating benchmarks that align with human reasoning and understanding what drives differences across methodologies.
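For readers unfamiliar with the task, the sketch below shows the standard vector-offset (3CosAdd) formulation of the word analogy task over static embeddings. It is only an illustration of the general setup, not the MATS evaluation protocol; the toy vocabulary and random vectors are stand-ins for real pretrained embeddings.

```python
# Minimal sketch of the vector-offset (3CosAdd) analogy evaluation, assuming a
# dict of precomputed static word embeddings (e.g. loaded from fastText or
# word2vec). Names and vectors below are illustrative only.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c, embeddings, exclude_query=True):
    """Return the word d maximizing cos(d, b - a + c), as in 'a : b :: c : d'."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    query = {a, b, c} if exclude_query else set()
    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in query:
            continue
        score = cosine(vec, target)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
vocab = ["man", "woman", "king", "queen"]
emb = {w: rng.normal(size=50) for w in vocab}
print(solve_analogy("man", "woman", "king", emb))
```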

OLISIA: a Cascade System for Spoken Dialogue State Tracking
Léo Jacqmin | Lucas Druart | Yannick Estève | Benoît Favre | Lina M Rojas | Valentin Vielzeuf
Proceedings of The Eleventh Dialog System Technology Challenge

Though Dialogue State Tracking (DST) is a core component of spoken dialogue systems, recent work on this task mostly deals with chat corpora, disregarding the discrepancies between spoken and written language. In this paper, we propose OLISIA, a cascade system which integrates an Automatic Speech Recognition (ASR) model and a DST model. We introduce several adaptations in the ASR and DST modules to improve integration and robustness to spoken conversations. With these adaptations, our system ranked first in DSTC11 Track 3, a benchmark to evaluate spoken DST. We conduct an in-depth analysis of the results and find that normalizing the ASR outputs and adapting the DST inputs through data augmentation, along with increasing the size of the pre-trained models, all play an important role in reducing the performance discrepancy between written and spoken conversations.
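As a rough illustration of the cascade shape described above (not the exact OLISIA configuration), the sketch below chains an off-the-shelf ASR pipeline, a simple text normalization step, and a text-to-text DST model. The model names, prompt format, and normalization rules are placeholder assumptions; in practice a DST-fine-tuned checkpoint would be needed.

```python
# Hedged sketch of a generic ASR -> normalization -> DST cascade, in the spirit
# of the system described above. Model names are placeholders: OLISIA's exact
# checkpoints, normalization rules, and data augmentation are in the paper.
import re
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
# A text-to-text DST model fine-tuned to emit "slot=value; ..." states is assumed.
dst = pipeline("text2text-generation", model="t5-small")

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so ASR output looks more like written training data."""
    text = text.lower()
    return re.sub(r"[^\w\s']", "", text).strip()

def track_state(audio_path: str, history: list[str]) -> str:
    user_turn = normalize(asr(audio_path)["text"])
    context = " [SEP] ".join(history + [user_turn])
    return dst("track dialogue state: " + context, max_new_tokens=64)[0]["generated_text"]

# Example call (paths and history are illustrative):
# print(track_state("turn_03.wav", ["i need a cheap hotel", "sure, which area?"]))
```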

2022

“Do you follow me?”: A Survey of Recent Approaches in Dialogue State Tracking
Léo Jacqmin | Lina M. Rojas Barahona | Benoit Favre
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

While communicating with a user, a task-oriented dialogue system has to track the user’s needs at each turn according to the conversation history. This process, called dialogue state tracking (DST), is crucial because it directly informs the downstream dialogue policy. DST has received a lot of interest in recent years, with the text-to-text paradigm emerging as the favored approach. In this review paper, we first present the task and its associated datasets. Then, considering a large number of recent publications, we identify highlights and advances of research in 2021-2022. Although neural approaches have enabled significant progress, we argue that some critical aspects of dialogue systems, such as generalizability, are still underexplored. To motivate future studies, we propose several research avenues.
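The text-to-text paradigm mentioned above typically flattens the dialogue history into a single input string and trains a sequence-to-sequence model to generate the state as slot-value pairs. The sketch below illustrates one possible serialization; the exact format varies across papers, and the one shown here is only an assumption.

```python
# Illustrative sketch of the text-to-text DST formulation: the dialogue history
# is flattened into one input string and the model is trained to generate the
# state as slot-value pairs. Slot names follow a MultiWOZ-like convention.
def serialize_dialogue(turns: list[tuple[str, str]]) -> str:
    """turns is a list of (speaker, utterance) pairs, oldest first."""
    return " ".join(f"[{speaker}] {utt}" for speaker, utt in turns)

def serialize_state(state: dict[str, str]) -> str:
    return "; ".join(f"{slot}={value}" for slot, value in sorted(state.items()))

dialogue = [
    ("user", "i am looking for a cheap restaurant in the centre"),
    ("system", "what type of food would you like?"),
    ("user", "italian please"),
]
gold_state = {"restaurant-pricerange": "cheap",
              "restaurant-area": "centre",
              "restaurant-food": "italian"}

source = "track dialogue state: " + serialize_dialogue(dialogue)
target = serialize_state(gold_state)
print(source)
print(target)
```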

« Est-ce que tu me suis ? » : une revue du suivi de l’état du dialogue (“Do you follow me?”: a review of dialogue state tracking)
Léo Jacqmin
Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 2 : 24e Rencontres Etudiants Chercheurs en Informatique pour le TAL (RECITAL)

While communicating with a user, a task-oriented dialogue system must track the user’s needs at each turn according to the conversation history. This process, called dialogue state tracking, is crucial because it directly informs the system’s actions. This article first presents the dialogue state tracking task, the available datasets, and modern approaches. Then, given the large number of publications in recent years, it aims to survey the highlights and advances of this research. Although neural approaches have enabled notable progress, we argue that some critical aspects of dialogue systems are still too little explored. To motivate future studies, several research avenues are proposed.

2021

GECko+: a Grammatical and Discourse Error Correction Tool
Eduardo Calò | Léo Jacqmin | Thibo Rosemplatt | Maxime Amblard | Miguel Couceiro | Ajinkya Kulkarni
Actes de la 28e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 3 : Démonstrations

We introduce GECko+, a web-based writing assistance tool for English that corrects errors both at the sentence and at the discourse level. It is based on two state-of-the-art models for grammar error correction and sentence ordering. GECko+ is available online as a web application that implements a pipeline combining the two models.
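The pipeline shape is easy to picture: each sentence is passed through a grammatical error correction model, and the corrected sentences are then reordered by a sentence-ordering model. The sketch below only mirrors that structure; both model calls are hypothetical placeholders rather than GECko+'s actual API.

```python
# Rough sketch of the two-stage pipeline shape described above: sentence-level
# grammatical error correction followed by discourse-level sentence reordering.
# Both model calls are hypothetical placeholders, not GECko+'s actual interface.
def correct_sentence(sentence: str) -> str:
    """Placeholder for a grammatical error correction model."""
    return sentence  # a real GEC model would return the corrected sentence

def reorder_sentences(sentences: list[str]) -> list[str]:
    """Placeholder for a sentence-ordering model choosing a coherent order."""
    return sentences  # a real model would return the most coherent permutation

def gecko_like_pipeline(text: str) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    corrected = [correct_sentence(s) for s in sentences]
    return ". ".join(reorder_sentences(corrected)) + "."
```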

SpanAlign: Efficient Sequence Tagging Annotation Projection into Translated Data applied to Cross-Lingual Opinion Mining
Léo Jacqmin | Gabriel Marzinotto | Justyna Gromada | Ewelina Szczekocka | Robert Kołodyński | Géraldine Damnati
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

Following the increasing performance of neural machine translation systems, the paradigm of using automatically translated data for cross-lingual adaptation is now studied in several applicative domains. However, the capacity to accurately project annotations remains an issue for sequence tagging tasks, where annotations must be projected with correct spans. Additionally, when the task involves noisy user-generated text, the quality of translation and annotation projection can be affected. In this paper, we propose to tackle multilingual sequence tagging with a new span alignment method and apply it to opinion target extraction from customer reviews. We show that, provided suitable heuristics, translated data with automatic span-level annotation projection can yield improvements both for cross-lingual adaptation, compared to zero-shot transfer, and for data augmentation, compared to a multilingual baseline.
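To make the projection problem concrete, the sketch below shows a generic alignment-based span projection in which a precomputed source-to-target word alignment is simply assumed. The actual SpanAlign method and the heuristics used in the paper differ, and the example sentence and alignment are purely illustrative.

```python
# Generic sketch of alignment-based span projection, to illustrate the problem
# the paper addresses. The real SpanAlign method and its heuristics differ;
# here a precomputed source-to-target word alignment is assumed as input.
def project_spans(src_tags, alignment, tgt_len):
    """src_tags: BIO tags per source token; alignment: list of (src_idx, tgt_idx) pairs."""
    tgt_tags = ["O"] * tgt_len
    for src_idx, tgt_idx in alignment:
        if src_tags[src_idx] != "O":
            tgt_tags[tgt_idx] = src_tags[src_idx]
    # Re-impose a valid BIO sequence: the first tag of each projected span is B-.
    prev_label = "O"
    for i, tag in enumerate(tgt_tags):
        label = tag.split("-")[-1]
        if tag != "O":
            tgt_tags[i] = ("I-" if prev_label == label else "B-") + label
        prev_label = label if tag != "O" else "O"
    return tgt_tags

# "the battery life is great" -> French "l' autonomie de la batterie est super"
src_tags = ["O", "B-TARGET", "I-TARGET", "O", "O"]
alignment = [(0, 0), (0, 3), (1, 4), (2, 1), (3, 5), (4, 6)]  # illustrative word alignment
# Naive projection can fragment the span across non-adjacent target tokens,
# which is exactly why span-level heuristics are needed.
print(project_spans(src_tags, alignment, tgt_len=7))
```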