Lorraine Goeuriot
2026
Is Biomedical Specialization Still Worth It? Insights from Domain-Adaptive Language Modelling with a New French Health Corpus
Aidan Mannion | Cécile Macaire | Armand Violle | Stéphane Ohayon | Xavier Tannier | Didier Schwab | Lorraine Goeuriot | François Portet
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large language models (LLMs) have demonstrated remarkable capabilities across diverse domains, yet their adaptation to specialized fields remains challenging, particularly for non-English languages. This study investigates domain-adaptive pre-training (DAPT) as a strategy for specializing small to mid-sized LLMs in the French biomedical domain through continued pre-training. We address two key research questions: the viability of specialized continued pre-training for domain adaptation, and the relationship between domain-specific performance gains and general capability degradation. Our contributions include the release of a fully open-licensed French biomedical corpus suitable for commercial and open-source applications, the training and release of specialized French biomedical LLMs, and novel insights for DAPT implementation. Our methodology encompasses the collection and refinement of high-quality French biomedical texts, the exploration of causal language modeling approaches using DAPT, and extensive comparative evaluations. Our results cast doubt on the efficacy of DAPT, in contrast to previous work, but we highlight its viability in smaller-scale, resource-constrained scenarios under the right conditions. Our findings further suggest that model merging after DAPT is essential to mitigate generalization trade-offs, and in some cases even improves performance on the specialized tasks at which DAPT was directed.
BenCSSmark: Making the Social Sciences Count in LLM Research
Arnault Chatelain | Etienne Ollion | Qianwen Guan | Diandra Fabre | Lorraine Goeuriot | Emile Chapuis | Abdelkrim Beloued | Marie Candito | Nicolas Hervé | Didier Schwab
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This position paper argues that the under-representation of social science tasks in contemporary LLM benchmarks limits advances in both LLM evaluation and social scientific inquiry. Benchmarks — standardized tools for assessing computational systems — are pivotal in the development of artificial intelligence (AI), including large language models (LLMs). Benchmarks do more than measure progress — they actively structure it, shaping reputations, research agendas, and commercial outcomes. Despite this central role, the social sciences are largely absent from mainstream evaluation frameworks, even though scholars in these fields generate dozens of rigorously annotated, context-sensitive datasets each year. Integrating this work into benchmark design could significantly improve the generalization and robustness of AI models. In turn, models trained on social scientific tasks would likely yield better performance on classic and contemporary tasks in disciplines as diverse as history, sociology, political science or economics. This is all the more pressing as these disciplines are quickly turning to LLMs for assistance. To address this gap, we introduce BenCSSmark, a benchmark composed of datasets annotated by computational social scientists. By integrating social scientific perspectives into benchmarking, BenCSSmark seeks to promote more robust, transparent, and socially relevant AI systems and to foster efficient collaboration.
Pantagruel: Unified Self-Supervised Encoders for French Text and Speech
Phuong-Hang Le | Valentin Pelloin | Arnault Chatelain | Maryem Bouziane | Mohammed Ghennai | Qianwen Guan | Kirill Milintsevich | Salima Mdhaffar | Aidan Mannion | Nils Defauw | Shuyue Gu | Alexandre Daniel Audibert | Marco Dinarelli | Yannick Estève | Lorraine Goeuriot | Steffen Lalande | Nicolas Hervé | Maximin Coavoux | François Portet | Étienne Ollion | Marie Candito | Maxime Peyrard | Solange Rossato | Benjamin Lecouteux | Aurélie Nardy | Gilles Sérasset | Vincent Segonne | Solène Evain | Diandra Fabre | Didier Schwab
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We release the Pantagruel models, a new family of self-supervised encoder models for French text and speech. Instead of predicting modality-tailored targets such as textual tokens or speech units, Pantagruel learns contextualized target representations in the feature space, allowing modality-specific encoders to capture linguistic and acoustic regularities more effectively. Separate models are pre-trained on large-scale French corpora, including Wikipedia, OSCAR and CroissantLLM for text, together with MultilingualLibriSpeech, LeBenchmark, and INA-100k for speech. INA-100k is a newly introduced 100,000-hour corpus of French audio derived from the archives of the Institut National de l’Audiovisuel (INA), the national repository of French radio and television broadcasts, providing highly diverse audio data. We evaluate Pantagruel across a broad range of downstream tasks spanning both modalities, including those from standard French benchmarks such as FLUE and LeBenchmark. Across these tasks, Pantagruel models show competitive or superior performance compared to strong French baselines such as CamemBERT, FlauBERT, and LeBenchmark2.0, while maintaining a shared architecture that can seamlessly handle either speech or text inputs. These results confirm the effectiveness of feature-space self-supervised objectives for French representation learning and highlight Pantagruel as a robust foundation for multimodal speech-text understanding.
2025
Vers des RAGs intégrant véracité, subjectivité et explicabilité
Alae Bouchiba | Adrian-Gabriel Chifu | Sébastien Fournier | Lorraine Goeuriot | Philippe Mulhem
Actes de l'atelier Intelligence Artificielle générative et ÉDUcation : Enjeux, Défis et Perspectives de Recherche 2025 (IA-ÉDU)
This article introduces X-RAG-VS, a framework for integrating veracity, subjectivity, and explainability into RAG systems, in response to educational needs. Through use cases and an analysis of existing models, we show that these dimensions remain insufficiently addressed. We propose a unified approach for more reliable, nuanced, and explainable answers.
2024
MedDialog-FR: A French Version of the MedDialog Corpus for Multi-label Classification and Response Generation Related to Women’s Intimate Health
Xingyu Liu | Vincent Segonne | Aidan Mannion | Didier Schwab | Lorraine Goeuriot | François Portet
Proceedings of the First Workshop on Patient-Oriented Language Processing (CL4Health) @ LREC-COLING 2024
This article presents MedDialog-FR, a large publicly available corpus of French medical conversations. Motivated by the lack of French dialogue corpora for data-driven dialogue systems and the paucity of available information related to women’s intimate health, we introduce an annotated corpus of question-and-answer dialogues between a real patient and a real doctor concerning women’s intimate health. The corpus is composed of about 20,000 dialogues automatically translated from the English corpus MedDialog-EN. The corpus test set is composed of 1,400 dialogues that have been manually post-edited and annotated with 22 categories from the UMLS ontology. We also fine-tuned state-of-the-art reference models to automatically perform multi-label classification and response generation, to give an initial performance benchmark and highlight the difficulty of the tasks.
Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains
Vincent Segonne | Aidan Mannion | Laura Cristina Alonzo Canul | Alexandre Daniel Audibert | Xingyu Liu | Cécile Macaire | Adrien Pupier | Yongxin Zhou | Mathilde Aguiar | Felix E. Herron | Magali Norré | Massih R Amini | Pierrette Bouillon | Iris Eshkol-Taravella | Emmanuelle Esperança-Rodier | Thomas François | Lorraine Goeuriot | Jérôme Goulian | Mathieu Lafourcade | Benjamin Lecouteux | François Portet | Fabien Ringeval | Vincent Vandeghinste | Maximin Coavoux | Marco Dinarelli | Didier Schwab
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Pretrained Language Models (PLMs) are the de facto backbone of most state-of-the-art NLP systems. In this paper, we introduce a family of domain-specific PLMs for French, focusing on three important domains: transcribed speech, medicine, and law. We use a transformer architecture based on efficient methods (LinFormer) to maximise their utility, since these domains often involve processing long documents. We evaluate and compare our models to state-of-the-art models on a diverse set of tasks and datasets, some of which are introduced in this paper. We gather the datasets into a new French-language evaluation benchmark for these three domains. We also compare various training configurations: continued pretraining, pretraining from scratch, as well as single- and multi-domain pretraining. Extensive domain-specific experiments show that it is possible to attain competitive downstream performance even when pre-training with the approximate LinFormer attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the contributed datasets.
Jargon : Une suite de modèles de langues et de référentiels d’évaluation pour les domaines spécialisés du français
Vincent Segonne | Aidan Mannion | Laura Cristina Alonzo Canul | Alexandre Audibert | Xingyu Liu | Cécile Macaire | Adrien Pupier | Yongxin Zhou | Mathilde Aguiar | Felix Herron | Magali Norré | Massih-Reza Amini | Pierrette Bouillon | Iris Eshkol-Taravella | Emmanuelle Esperança-Rodier | Thomas François | Lorraine Goeuriot | Jérôme Goulian | Mathieu Lafourcade | Benjamin Lecouteux | François Portet | Fabien Ringeval | Vincent Vandeghinste | Maximin Coavoux | Marco Dinarelli | Didier Schwab
Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 2 : traductions d'articles publiés
Pretrained language models (PLMs) are today the de facto backbone of most natural language processing systems. In this article, we present Jargon, a family of PLMs for specialized domains of French, focusing on three domains: transcribed speech, the clinical/biomedical domain, and the legal domain. We use a transformer architecture based on computationally efficient methods (LinFormer), since these domains often involve processing long documents. We evaluate our models and compare them to state-of-the-art models on a varied set of tasks and evaluation corpora, some of which are introduced in our article. We gather the datasets into a new French-language evaluation benchmark for these three domains. We also compare various training configurations: continued self-supervised pretraining on the specialized data, pretraining from scratch, and single- and multi-domain pretraining. Our extensive experiments in specialized domains show that it is possible to attain competitive downstream performance, even when pretraining with LinFormer's approximate attention mechanism. For full reproducibility, we release the models and the pretraining data, as well as the corpora used.
2023
Entity Enhanced Attention Graph-Based Passages Retrieval
Lucas Albarede | Lorraine Goeuriot | Philippe Mulhem | Claude Le Pape-Gardeux | Sylvain Marie | Trinidad Chardin-Segui
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)
Passage retrieval is crucial in specialized domains where documents are long and complex, such as patents, legal documents, and scientific reports. In this paper, we explore the integration of entities and passages in heterogeneous attention graph models dedicated to passage retrieval. We use the two re-ranking-based passage retrieval architectures proposed in [1]. We evaluate our proposal on the TREC CAR Y3 Passage Retrieval Task. The results obtained show an improvement over state-of-the-art techniques and prove the effectiveness of the approach. Our experiments also show the importance of using adequate parameters for such an approach.
UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition
Aidan Mannion | Didier Schwab | Lorraine Goeuriot
Proceedings of the 5th Clinical Natural Language Processing Workshop
Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.
Vers l’évaluation continue des systèmes de recherche d’information.
Petra Galuscakova | Romain Deveaud | Gabriela Gonzalez-Saez | Philippe Mulhem | Lorraine Goeuriot | Florina Piroi | Martin Popel
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)
This article presents the dataset associated with the first LongEval evaluation campaign at CLEF 2023. The goal of this evaluation is to study how information retrieval systems react to the evolution of the data they handle (in particular, the documents and the queries). We detail the objectives of the task, the data acquisition process, and the evaluation measures used.
Augmentation des modèles de langage français par graphes de connaissances pour la reconnaissance des entités biomédicales
Aidan Mannion | Didier Schwab | Lorraine Goeuriot | Thierry Chevalier
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 1 : travaux de recherche originaux -- articles longs
Recent work in natural language processing has demonstrated the effectiveness of pretrained language models for a wide variety of general applications. Large-scale language models generally acquire these capabilities by modelling the statistical distribution of words through self-supervised learning on large quantities of text. However, for low-resource specialized domains, such as the processing of clinical documents, particularly in languages other than English, the need to integrate structured knowledge remains of great importance. This article focuses on one of these specialized applications of language modelling from limited resources: information extraction from biomedical and clinical documents in French. In particular, we show that by supplementing the masked-word pretraining of transformer neural networks with prediction objectives extracted from a biomedical knowledge base, their performance on two different French named entity recognition tasks can be improved.
2021
Identification de profil clinique du patient: Une approche de classification de séquences utilisant des modèles de langage français contextualisés (Identification of patient clinical profiles: A sequence classification approach using contextualised French language models)
Aidan Mannion | Thierry Chevalier | Didier Schwab | Lorraine Goeuriot
Actes de la 28e Conférence sur le Traitement Automatique des Langues Naturelles. Atelier DÉfi Fouille de Textes (DEFT)
This article presents a summary of our submission for Task 1 of DEFT 2021. This task consists of identifying a patient's clinical profile from a textual description of their clinical case by identifying the types of pathology mentioned in the text. This work investigates text classification approaches using contextualized French word embeddings. Starting from a baseline model built for general French language understanding, we use models pretrained with masked language modelling and fine-tuned to the identification task, using an external corpus of clinical texts provided by SOS Médecins, to develop ensembles of binary classifiers associating clinical texts with pathology categories.
2018
Building Evaluation Datasets for Cultural Microblog Retrieval
Lorraine Goeuriot | Josiane Mothe | Philippe Mulhem | Eric SanJuan
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2016
Building Evaluation Datasets for Consumer-Oriented Information Retrieval
Lorraine Goeuriot | Liadh Kelly | Guido Zuccon | Joao Palotti
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Common people often experience difficulties in accessing relevant, correct, accurate and understandable health information online. Developing search techniques that aid these information needs is challenging. In this paper we present the datasets created by the CLEF eHealth Lab from 2013-2015 for the evaluation of search solutions that support common people finding health information online. Specifically, the CLEF eHealth information retrieval (IR) task of this Lab has provided the research community with benchmarks for evaluating consumer-centered health information retrieval, thus fostering research and development aimed at addressing this challenging problem. Given consumer queries, the goal of the task is to retrieve relevant documents from the provided collection of web pages. The shared datasets provide a large health web crawl, queries representing people’s real-world information needs, and relevance assessments for the queries.
2014
Porting a Summarizer to the French Language
Rémi Bois | Johannes Leveling | Lorraine Goeuriot | Gareth J. F. Jones | Liadh Kelly
Proceedings of TALN 2014 (Volume 2: Short Papers)
2009
Compilation of Specialized Comparable Corpora in French and Japanese
Lorraine Goeuriot | Emmanuel Morin | Béatrice Daille
Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non-parallel Corpora (BUCC)
2008
Characterization of Scientific and Popular Science Discourse in French, Japanese and Russian
Lorraine Goeuriot | Natalia Grabar | Béatrice Daille
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
We aim to characterize the comparability of corpora, and we address this issue in a trilingual context through the distinction between expert and non-expert documents. We work separately with corpora composed of documents from the medical domain in three languages (French, Japanese and Russian) that are linguistically distant from one another. In our approach, documents are characterized in each language by their topic and by a discursive typology positioned at three levels of document analysis: structural, modal and lexical. The document typology is implemented with two learning algorithms (SVMlight and C4.5). Evaluation of the results shows that the proposed discursive typology can be transposed from one language to another, as it indeed allows the two target discourses (science and popular science) to be distinguished. However, we observe that performance varies considerably according to language, algorithm and type of discursive characteristic.
2007
Caractérisation des discours scientifiques et vulgarisés en français, japonais et russe
Lorraine Goeuriot | Natalia Grabar | Béatrice Daille
Actes de la 14ème conférence sur le Traitement Automatique des Langues Naturelles. Posters
The main objective of our work is to study the notion of corpus comparability, and we address this question in a monolingual context by seeking to distinguish scientific from popular-science documents. We work separately on corpora composed of documents from the medical domain in three linguistically distant languages (French, Japanese and Russian). In our approach, documents are characterized in each language by their topic and by a discursive typology situated at three levels of document analysis: structural, modal and lexical. Document typing is implemented with two learning algorithms (SVMlight and C4.5). The evaluation of the results shows that the proposed discursive typology is portable from one language to another, as it indeed allows the two discourses to be distinguished. We nevertheless observe widely varying performance according to language, algorithm and type of discursive characteristic.
Co-authors
- Didier Schwab 9
- Aidan Mannion 8
- François Portet 5
- Philippe Mulhem 4
- Vincent Segonne 4
- Maximin Coavoux 3
- Béatrice Daille 3
- Marco Dinarelli 3
- Benjamin Lecouteux 3
- Xingyu Liu 3
- Cécile Macaire 3
- Mathilde Aguiar 2
- Laura Cristina Alonzo Canul 2
- Alexandre Daniel Audibert 3
- Pierrette Bouillon 2
- Marie Candito 2
- Arnault Chatelain 2
- Thierry Chevalier 2
- Iris Eshkol-Taravella 2
- Diandra Fabre 2
- Thomas François 2
- Jérôme Goulian 2
- Natalia Grabar 2
- Qianwen Guan 2
- Nicolas Hervé 2
- Liadh Kelly 2
- Mathieu Lafourcade 2
- Magali Norré 2
- Etienne Ollion 2
- Adrien Pupier 2
- Fabien Ringeval 2
- Vincent Vandeghinste 2
- Yongxin Zhou 2
- Lucas Albarede 1
- Massih-Reza Amini 2
- Abdelkrim Beloued 1
- Rémi Bois 1
- Alae Bouchiba 1
- Maryem Bouziane 1
- Emile Chapuis 1
- Trinidad Chardin-Segui 1
- Adrian-Gabriel Chifu 1
- Nils Defauw 1
- Romain Deveaud 1
- Emmanuelle Esperança-Rodier 2
- Yannick Estève 1
- Solène Evain 1
- Sébastien Fournier 1
- Petra Galuščáková 1
- Mohammed Ghennai 1
- Gabriela Gonzalez-Saez 1
- Shuyue Gu 1
- Felix E. Herron 2
- Gareth J. F. Jones 1
- Steffen Lalande 1
- Phuong-Hang Le 1
- Claude Le Pape-Gardeux 1
- Johannes Leveling 1
- Sylvain Marie 1
- Salima Mdhaffar 1
- Kirill Milintsevich 1
- Emmanuel Morin 1
- Josiane Mothe 1
- Aurélie Nardy 1
- Stéphane Ohayon 1
- Joao Palotti 1
- Valentin Pelloin 1
- Maxime Peyrard 1
- Florina Piroi 1
- Martin Popel 1
- Solange Rossato 1
- Eric Sanjuan 1
- Gilles Sérasset 1
- Xavier Tannier 1
- Armand Violle 1
- Guido Zuccon 1