Lina M. Rojas Barahona

Also published as: Lina M. Rojas-Barahona, Lina Maria Rojas-Barahona, Lina Rojas, Lina Rojas-Barahona


2024

pdf
KGConv, a Conversational Corpus Grounded in Wikidata
Quentin Brabant | Lina M. Rojas Barahona | Gwénolé Lecorvé | Claire Gardent
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

We present KGConv, a large corpus of 71k English conversations in which each question-answer pair is grounded in a Wikidata fact. Conversations contain 8.6 questions on average, and for each Wikidata fact we provide multiple variants (12 on average) of the corresponding question, generated using templates, human annotations, hand-crafted rules and a question-rewriting neural model. We provide baselines for the task of Knowledge-Based Conversational Question Generation. KGConv can further be used for other generation and analysis tasks such as single-turn question generation from Wikidata triples, question rewriting, question answering from conversations or from knowledge graphs, and quiz generation.
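
As an illustration of the kind of question generation the corpus supports, below is a minimal sketch in plain Python of producing a question variant from a Wikidata-style fact with a hand-written template; the template table and the example fact are illustrative assumptions, not the KGConv pipeline itself.

```python
# Minimal sketch (not the KGConv pipeline): produce one question variant
# from a Wikidata-style fact using a per-property template.
from dataclasses import dataclass
import random

@dataclass
class Fact:
    subject: str    # entity label
    predicate: str  # Wikidata property label
    obj: str        # answer entity or literal

# Hypothetical per-property templates; KGConv combines templates, human
# annotations, rules and a neural rewriter to obtain ~12 variants per fact.
TEMPLATES = {
    "place of birth": [
        "Where was {subject} born?",
        "In which city was {subject} born?",
    ],
    "author": [
        "Who wrote {subject}?",
        "Who is the author of {subject}?",
    ],
}

def generate_question(fact: Fact) -> str:
    """Pick a template for the fact's predicate and fill in the subject."""
    variants = TEMPLATES.get(fact.predicate)
    if variants is None:
        # Generic fallback when no template exists for this property.
        return f"What is the {fact.predicate} of {fact.subject}?"
    return random.choice(variants).format(subject=fact.subject)

if __name__ == "__main__":
    fact = Fact("Victor Hugo", "place of birth", "Besançon")
    print(generate_question(fact), "->", fact.obj)
```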

2023

pdf
Few-Shot Structured Policy Learning for Multi-Domain and Multi-Task Dialogues
Thibault Cordier | Tanguy Urvoy | Fabrice Lefèvre | Lina M. Rojas Barahona
Findings of the Association for Computational Linguistics: EACL 2023

Reinforcement learning has been widely adopted to model dialogue managers in task-oriented dialogues. However, the user simulators provided by state-of-the-art dialogue frameworks are only rough approximations of human behaviour. The ability to learn from a small number of human interactions is hence crucial, especially in multi-domain and multi-task environments where the action space is large. We therefore propose to use structured policies to improve sample efficiency when learning in these kinds of environments. We also evaluate the impact of learning from human vs. simulated experts. Among the different levels of structure that we tested, graph neural networks (GNNs) show a remarkable superiority by reaching a success rate above 80% with only 50 dialogues when learning from simulated experts. They also show superiority when learning from human experts, although a performance drop was observed. We therefore suggest concentrating future research efforts on bridging the gap between human data, simulators and automatic evaluators in dialogue frameworks.
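
As a rough illustration of what a graph-structured dialogue policy can look like, here is a minimal sketch in plain PyTorch: slot/domain nodes exchange messages and a shared readout scores actions. The node features, adjacency and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a graph-structured dialogue policy: one round of
# message passing over slot/domain nodes, then per-node action scores.
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    def __init__(self, node_dim: int, hidden_dim: int, n_actions_per_node: int):
        super().__init__()
        self.msg = nn.Linear(node_dim, hidden_dim)       # message function (shared across nodes)
        self.upd = nn.GRUCell(hidden_dim, node_dim)      # node update
        self.readout = nn.Linear(node_dim, n_actions_per_node)  # per-node action scores

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (n_nodes, node_dim), adj: (n_nodes, n_nodes) normalized adjacency
        messages = adj @ torch.relu(self.msg(nodes))     # aggregate neighbour messages
        nodes = self.upd(messages, nodes)                # update node states
        return self.readout(nodes).flatten()             # scores for all (node, action) pairs

# Toy usage: 3 slot nodes, fully connected, 4 actions per slot.
policy = GraphPolicy(node_dim=16, hidden_dim=32, n_actions_per_node=4)
nodes = torch.randn(3, 16)
adj = torch.ones(3, 3) / 3
q_values = policy(nodes, adj)
print(q_values.shape)  # torch.Size([12])
```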

pdf
Investigating the Effect of Relative Positional Embeddings on AMR-to-Text Generation with Structural Adapters
Sebastien Montella | Alexis Nasr | Johannes Heinecke | Frederic Bechet | Lina M. Rojas Barahona
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Text generation from Abstract Meaning Representation (AMR) has substantially benefited from the popularization of Pretrained Language Models (PLMs). Myriad approaches have linearized the input graph as a sequence of tokens to fit the PLM tokenization requirements. Nevertheless, this transformation jeopardizes the structural integrity of the graph and is therefore detrimental to its resulting representation. To overcome this issue, Ribeiro et al. (2021b) have recently proposed StructAdapt, a structure-aware adapter which injects the input graph connectivity within PLMs using Graph Neural Networks (GNNs). In this paper, we investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text generation and, in parallel, we examine the robustness of StructAdapt. Through ablation studies, graph attacks and link prediction, we reveal that RPE might be partially encoding input graphs. We suggest that further research on the role of RPE will provide valuable insights for Graph-to-Text generation.
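
For readers unfamiliar with RPE, the sketch below (plain PyTorch) shows one common way of injecting relative positions: a learned bias, indexed by the clipped token-to-token distance, is added to the self-attention logits. The dimensions and clipping distance are illustrative assumptions; this is not the exact scheme analysed in the paper.

```python
# Minimal sketch of self-attention with a learned relative-position bias.
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    def __init__(self, dim: int, max_dist: int = 8):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.max_dist = max_dist
        # one learned scalar bias per clipped relative distance
        self.rel_bias = nn.Embedding(2 * max_dist + 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, dim)
        n, d = x.shape
        scores = self.q(x) @ self.k(x).T / d ** 0.5      # content-based logits
        pos = torch.arange(n)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist) + self.max_dist
        scores = scores + self.rel_bias(rel).squeeze(-1)  # add relative-position bias
        return torch.softmax(scores, dim=-1) @ self.v(x)

attn = RelPosAttention(dim=32)
print(attn(torch.randn(5, 32)).shape)  # torch.Size([5, 32])
```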

2022

pdf
Graph Neural Network Policies and Imitation Learning for Multi-Domain Task-Oriented Dialogues
Thibault Cordier | Tanguy Urvoy | Fabrice Lefèvre | Lina M. Rojas Barahona
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Task-oriented dialogue systems are designed to achieve specific goals while conversing with humans. In practice, they may have to handle several domains and tasks simultaneously. The dialogue manager must therefore be able to take domain changes into account and plan over different domains/tasks in order to deal with multi-domain dialogues. However, reinforcement learning in such a context becomes difficult because the state-action space is larger while the reward signal remains scarce. Our experimental results suggest that structured policies based on graph neural networks combined with different degrees of imitation learning can effectively handle multi-domain dialogues. The reported experiments underline the benefit of structured policies over standard policies.

pdf
“Do you follow me?”: A Survey of Recent Approaches in Dialogue State Tracking
Léo Jacqmin | Lina M. Rojas Barahona | Benoit Favre
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

While communicating with a user, a task-oriented dialogue system has to track the user’s needs at each turn according to the conversation history. This process, called dialogue state tracking (DST), is crucial because it directly informs the downstream dialogue policy. DST has received a lot of interest in recent years, with the text-to-text paradigm emerging as the favored approach. In this review paper, we first present the task and its associated datasets. Then, considering a large number of recent publications, we identify highlights and advances of research in 2021-2022. Although neural approaches have enabled significant progress, we argue that some critical aspects of dialogue systems, such as generalizability, are still underexplored. To motivate future studies, we propose several research avenues.

pdf
CoQAR: Question Rewriting on CoQA
Quentin Brabant | Gwénolé Lecorvé | Lina M. Rojas Barahona
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Questions asked by humans during a conversation often contain contextual dependencies, i.e., explicit or implicit references to previous dialogue turns. These dependencies take the form of coreferences (e.g., via pronoun use) or ellipses, and can make understanding difficult for automated systems. One way to facilitate the understanding and subsequent processing of a question is to rewrite it into an out-of-context form, i.e., a form that can be understood without the conversational context. We propose CoQAR, a corpus containing 4.5K conversations from the Conversational Question-Answering dataset CoQA, for a total of 53K follow-up question-answer pairs. Each original question was manually annotated with at least 2 and at most 3 out-of-context rewritings. CoQA originally contains 8k conversations, which sum up to 127k question-answer pairs. CoQAR can be used for the supervised learning of three tasks: question paraphrasing, question rewriting and conversational question answering. In order to assess the quality of CoQAR’s rewritings, we conduct several experiments consisting of training and evaluating models for these three tasks. Our results support the idea that question rewriting can be used as a preprocessing step for (conversational and non-conversational) question answering models, thereby increasing their performance.
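
Question rewriting on such pairs is typically cast as sequence-to-sequence learning. The sketch below, assuming the Hugging Face transformers library and a generic T5 checkpoint, shows one training step and one generation step on a toy rewriting pair; the prompt format and example are illustrative assumptions, not the paper's baselines.

```python
# Minimal sketch: fine-tune a seq2seq model on one CoQAR-style rewriting pair
# (history + in-context question -> out-of-context question).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy training pair (hypothetical prompt format).
source = "rewrite: Q: Who wrote Les Miserables? A: Victor Hugo. Q: When was he born?"
target = "When was Victor Hugo born?"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

model.train()
loss = model(**inputs, labels=labels).loss  # teacher-forced cross-entropy
loss.backward()
optimizer.step()

# After training, rewriting is plain generation.
model.eval()
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```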

pdf
SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications
Gwénolé Lecorvé | Morgan Veyret | Quentin Brabant | Lina M. Rojas Barahona
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

This paper focuses on the generation of natural language questions based on SPARQL queries, with an emphasis on conversational use cases (follow-up question answering). It studies what can be achieved so far with current deep learning models (namely pretrained T5 and BART models). To do so, 4 knowledge-based QA corpora have been homogenized for the task and a new challenge set is introduced. A first series of experiments analyzes the impact of different training setups, while a second series seeks to understand what is still difficult for these models. The results from automatic metrics and human evaluation show that simple questions and frequent templates of SPARQL queries are usually well processed, whereas complex questions and conversational dimensions (coreferences and ellipses) are still difficult to handle. The experimental material is publicly available at https://github.com/Orange-OpenSource/sparql-to-text.

pdf
Transfer Learning and Masked Generation for Answer Verbalization
Sebastien Montella | Lina Rojas-Barahona | Frederic Bechet | Johannes Heinecke | Alexis Nasr
Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI)

Structured Knowledge has recently emerged as an essential component to support fine-grained Question Answering (QA). In general, QA systems query a Knowledge Base (KB) to detect and extract the raw answers as the final prediction. However, since raw answers lack context, language generation can offer a much more informative and complete response. In this paper, we propose to combine the power of transfer learning and the advantage of entity placeholders to produce high-quality verbalizations of answers extracted from a KB. We claim that such an approach is especially well suited for answer generation. Our experiments show 44.25%, 3.26% and 29.10% relative gains in BLEU over the state of the art on the VQuAnDA, ParaQA and VANiLLa datasets, respectively. We additionally provide minor hallucination corrections in VANiLLa, affecting 5% of each of the training and test sets. We observe a median absolute gain of 0.81 SacreBLEU. This strengthens the importance of data quality when using automated evaluation.
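
The placeholder idea can be illustrated with a small sketch: the KB answer is hidden behind a placeholder token before generation and substituted back afterwards. The placeholder token, prompt format and example below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of masked generation with an entity placeholder.
PLACEHOLDER = "[ANS]"

def mask_answer(question: str) -> str:
    """Build the generation input with the answer hidden behind a placeholder."""
    return f"question: {question} answer: {PLACEHOLDER}"

def fill_placeholder(generated: str, answer: str) -> str:
    """Substitute the raw KB answer back into the generated template."""
    return generated.replace(PLACEHOLDER, answer)

question = "Who painted the Mona Lisa?"
kb_answer = "Leonardo da Vinci"

model_input = mask_answer(question)
# A fine-tuned seq2seq model would map `model_input` to something like:
generated = f"The Mona Lisa was painted by {PLACEHOLDER}."
print(fill_placeholder(generated, kb_answer))
# -> "The Mona Lisa was painted by Leonardo da Vinci."
```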

2021

pdf
Hyperbolic Temporal Knowledge Graph Embeddings with Relational and Time Curvatures
Sebastien Montella | Lina M. Rojas Barahona | Johannes Heinecke
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

pdf
Denoising Pre-Training and Data Augmentation Strategies for Enhanced RDF Verbalization with Transformers
Sebastien Montella | Betty Fabre | Tanguy Urvoy | Johannes Heinecke | Lina Rojas-Barahona
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

The task of verbalizing RDF triples has grown in popularity due to the rising ubiquity of Knowledge Bases (KBs). The formalism of RDF triples is a simple and efficient way to store facts at a large scale. However, its abstract representation makes it difficult for humans to interpret. For this purpose, the WebNLG challenge aims at promoting automated RDF-to-text generation. We propose to pre-train the Transformer model on augmented data obtained with a data augmentation strategy. Our experimental results show minimum relative increases of 3.73%, 126.05% and 88.16% in BLEU score for seen categories, unseen entities and unseen categories, respectively, over the standard training.
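
A common preprocessing step for this task is to linearize the RDF triples into a flat token sequence before feeding them to the Transformer; the sketch below shows one such linearization. The delimiter tokens and example triples are illustrative assumptions.

```python
# Minimal sketch: linearize RDF triples into a single source string.
def linearize(triples: list[tuple[str, str, str]]) -> str:
    """Turn a set of RDF triples into a flat token sequence."""
    parts = []
    for subj, pred, obj in triples:
        parts.append(f"<S> {subj} <P> {pred} <O> {obj}")
    return " ".join(parts)

triples = [
    ("Alan_Turing", "birthPlace", "London"),
    ("Alan_Turing", "field", "Computer_Science"),
]
print(linearize(triples))
# <S> Alan_Turing <P> birthPlace <O> London <S> Alan_Turing <P> field <O> Computer_Science
# Target text: "Alan Turing, who worked in computer science, was born in London."
```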

2019

pdf
Spoken Conversational Search for General Knowledge
Lina M. Rojas Barahona | Pascal Bellec | Benoit Besset | Martinho Dossantos | Johannes Heinecke | Munshi Asadullah | Olivier Leblouch | Jean-Yves Lancien | Geraldine Damnati | Emmanuel Mory | Frederic Herledan
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

We present a spoken conversational question answering proof of concept that is able to answer questions about general knowledge from Wikidata. The dialogue agent not only orchestrates various agents but also resolves coreferences and ellipses.

pdf
Graph2Bots, Unsupervised Assistance for Designing Chatbots
Jean-Leon Bouraoui | Sonia Le Meitour | Romain Carbou | Lina M. Rojas Barahona | Vincent Lemaire
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

We present Graph2Bots, a tool for assisting conversational agent designers. It extracts a graph representation from human-human conversations by using unsupervised learning. The generated graph contains the main stages of the dialogue and their inner transitions. The graphical user interface (GUI) then allows graph editing.

2018

pdf
Feudal Reinforcement Learning for Dialogue Management in Large Domains
Iñigo Casanueva | Paweł Budzianowski | Pei-Hao Su | Stefan Ultes | Lina M. Rojas-Barahona | Bo-Hsiang Tseng | Milica Gašić
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Reinforcement learning (RL) is a promising approach to solving dialogue policy optimisation. Traditional RL algorithms, however, fail to scale to large domains due to the curse of dimensionality. We propose a novel Dialogue Management architecture, based on Feudal RL, which decomposes the decision into two steps: a first step where a master policy selects a subset of primitive actions, and a second step where a primitive action is chosen from the selected subset. The structural information included in the domain ontology is used to abstract the dialogue state space, taking the decisions at each step using different parts of the abstracted state. This, combined with an information-sharing mechanism between slots, increases the scalability to large domains. We show that an implementation of this approach, based on Deep-Q Networks, significantly outperforms the previous state of the art in several dialogue domains and environments, without the need for any additional reward signal.
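
The two-step Feudal decision can be sketched in plain PyTorch as follows: a master network scores action subsets, then a subset-specific network picks a primitive action within the chosen subset. The action inventory, state abstraction and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a two-step (Feudal-style) action selection.
import torch
import torch.nn as nn

SUBSETS = {
    0: ["hello", "bye", "repeat"],          # slot-independent actions
    1: ["request_food", "confirm_food"],    # actions for the "food" slot
    2: ["request_area", "confirm_area"],    # actions for the "area" slot
}

class FeudalPolicy(nn.Module):
    def __init__(self, state_dim: int):
        super().__init__()
        self.master = nn.Linear(state_dim, len(SUBSETS))  # scores the subsets
        self.primitive = nn.ModuleDict({
            str(k): nn.Linear(state_dim, len(v)) for k, v in SUBSETS.items()
        })                                                # scores actions inside each subset

    def act(self, state: torch.Tensor) -> str:
        subset = int(self.master(state).argmax())                      # step 1: choose a subset
        action_idx = int(self.primitive[str(subset)](state).argmax())  # step 2: choose within it
        return SUBSETS[subset][action_idx]

policy = FeudalPolicy(state_dim=8)
print(policy.act(torch.randn(8)))
```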

pdf
Addressing Objects and Their Relations: The Conversational Entity Dialogue Model
Stefan Ultes | Paweł Budzianowski | Iñigo Casanueva | Lina M. Rojas-Barahona | Bo-Hsiang Tseng | Yen-Chen Wu | Steve Young | Milica Gašić
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its capability to model complex dialogue structures, e.g., relations. In this work, we propose a novel dialogue model that is centred around entities and is able to model relations as well as multiple entities of the same type. In a prototype implementation, we demonstrate the benefits of relation modelling at the dialogue level and show that a trained policy using these relations outperforms the multi-domain baseline. Furthermore, we show that by modelling the relations at the dialogue level, the system is capable of processing relations present in the user input and even learns to address them in the system response.

pdf
Deep learning for language understanding of mental health concepts derived from Cognitive Behavioural Therapy
Lina M. Rojas-Barahona | Bo-Hsiang Tseng | Yinpei Dai | Clare Mansfield | Osman Ramadan | Stefan Ultes | Michael Crawford | Milica Gašić
Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis

In recent years, we have seen deep learning and distributed representations of words and sentences make an impact on a number of natural language processing tasks, such as similarity, entailment and sentiment analysis. Here we introduce a new task: understanding of mental health concepts derived from Cognitive Behavioural Therapy (CBT). We define a mental health ontology based on the CBT principles, annotate a large corpus where these phenomena are exhibited and perform understanding using deep learning and distributed representations. Our results show that deep learning models combined with word embeddings or sentence embeddings significantly outperform non-deep-learning models on this difficult task. This understanding module will be an essential component of a statistical dialogue system delivering therapy.

2017

pdf
A Network-based End-to-End Trainable Task-oriented Dialogue System
Tsung-Hsien Wen | David Vandyke | Nikola Mrkšić | Milica Gašić | Lina M. Rojas-Barahona | Pei-Hao Su | Stefan Ultes | Steve Young
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.

pdf
Reward-Balancing for Statistical Spoken Dialogue Systems using Multi-objective Reinforcement Learning
Stefan Ultes | Paweł Budzianowski | Iñigo Casanueva | Nikola Mrkšić | Lina M. Rojas-Barahona | Pei-Hao Su | Tsung-Hsien Wen | Milica Gašić | Steve Young
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

Reinforcement learning is widely used for dialogue policy optimization where the reward function often consists of more than one component, e.g., the dialogue success and the dialogue length. In this work, we propose a structured method for finding a good balance between these components by searching for the optimal reward component weighting. To render this search feasible, we use multi-objective reinforcement learning to significantly reduce the number of training dialogues required. We apply our proposed method to find optimized component weights for six domains and compare them to a default baseline.
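
The reward structure being balanced can be illustrated with a small sketch: the scalar reward combines a success component and a length penalty, and different weightings yield different trade-offs. The weights and toy dialogue outcomes below are illustrative assumptions; the paper searches for such weightings with multi-objective RL rather than by hand.

```python
# Minimal sketch: a scalar dialogue reward as a weighted combination of
# task success and dialogue length.
def reward(success: bool, n_turns: int, w_success: float, w_length: float) -> float:
    """Combine the two reward components with a given weighting."""
    return w_success * (1.0 if success else 0.0) - w_length * n_turns

# Compare two hand-picked weightings on two toy dialogue outcomes.
outcomes = [(True, 12), (False, 4)]  # (task success, dialogue length)
for w_s, w_l in [(20.0, 1.0), (10.0, 0.5)]:
    scores = [reward(s, n, w_s, w_l) for s, n in outcomes]
    print(f"weights ({w_s}, {w_l}) -> rewards {scores}")
```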

pdf
Sub-domain Modelling for Dialogue Management with Hierarchical Reinforcement Learning
Paweł Budzianowski | Stefan Ultes | Pei-Hao Su | Nikola Mrkšić | Tsung-Hsien Wen | Iñigo Casanueva | Lina M. Rojas-Barahona | Milica Gašić
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

Human conversation is inherently complex, often spanning many different topics/domains. This makes policy learning for dialogue systems very challenging. Standard flat reinforcement learning methods do not provide an efficient framework for modelling such dialogues. In this paper, we focus on the under-explored problem of multi-domain dialogue management. First, we propose a new method for hierarchical reinforcement learning using the option framework. Next, we show that the proposed architecture learns faster and arrives at a better policy than the existing flat ones do. Moreover, we show how pretrained policies can be adapted to more complex systems with an additional set of new actions. In doing that, we show that our approach has the potential to facilitate policy optimisation for more sophisticated multi-domain dialogue systems.

pdf
DialPort, Gone Live: An Update After A Year of Development
Kyusong Lee | Tiancheng Zhao | Yulun Du | Edward Cai | Allen Lu | Eli Pincus | David Traum | Stefan Ultes | Lina M. Rojas-Barahona | Milica Gasic | Steve Young | Maxine Eskenazi
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

DialPort collects user data for connected spoken dialog systems. At present six systems are linked to a central portal that directs the user to the applicable system and suggests systems that the user may be interested in. User data has started to flow into the system.

pdf
PyDial: A Multi-domain Statistical Dialogue System Toolkit
Stefan Ultes | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Dongho Kim | Iñigo Casanueva | Paweł Budzianowski | Nikola Mrkšić | Tsung-Hsien Wen | Milica Gašić | Steve Young
Proceedings of ACL 2017, System Demonstrations

2016

pdf
Conditional Generation and Snapshot Learning in Neural Dialogue Systems
Tsung-Hsien Wen | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Pei-Hao Su | Stefan Ultes | David Vandyke | Steve Young
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Multi-domain Neural Network Language Generation for Spoken Dialogue Systems
Tsung-Hsien Wen | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Steve Young
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Counter-fitting Word Vectors to Linguistic Constraints
Nikola Mrkšić | Diarmuid Ó Séaghdha | Blaise Thomson | Milica Gašić | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Tsung-Hsien Wen | Steve Young
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Exploiting Sentence and Context Representations in Deep Neural Models for Spoken Language Understanding
Lina M. Rojas-Barahona | Milica Gašić | Nikola Mrkšić | Pei-Hao Su | Stefan Ultes | Tsung-Hsien Wen | Steve Young
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

This paper presents a deep learning architecture for the semantic decoder component of a Statistical Spoken Dialogue System. In a slot-filling dialogue, the semantic decoder predicts the dialogue act and a set of slot-value pairs from a set of n-best hypotheses returned by the Automatic Speech Recognizer. Most current models for spoken language understanding assume (i) word-aligned semantic annotations, as in sequence taggers, and (ii) delexicalisation, or a mapping of input words to domain-specific concepts using heuristics that try to capture morphological variation but that do not scale to other domains or to language variation (e.g., morphology, synonyms, paraphrasing). In this work the semantic decoder is trained using unaligned semantic annotations and it uses distributed semantic representation learning to overcome the limitations of explicit delexicalisation. The proposed architecture uses a convolutional neural network for the sentence representation and a long short-term memory network for the context representation. Results are presented for the publicly available DSTC2 corpus and an In-car corpus which is similar to DSTC2 but has a significantly higher word error rate (WER).
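
A minimal sketch of this architecture in plain PyTorch is given below: a convolutional encoder over the current utterance, an LSTM over the dialogue history, and a classifier over their concatenation. Vocabulary size, dimensions and the label-set size are illustrative assumptions, not the trained system.

```python
# Minimal sketch of a CNN sentence encoder + LSTM context encoder for SLU.
import torch
import torch.nn as nn

class CNNLSTMDecoder(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, n_filters=100, ctx_dim=64, n_labels=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)  # sentence encoder
        self.ctx_lstm = nn.LSTM(emb_dim, ctx_dim, batch_first=True)           # context encoder
        self.classifier = nn.Linear(n_filters + ctx_dim, n_labels)

    def forward(self, utterance: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # utterance: (batch, sent_len) token ids of the current ASR hypothesis
        # context:   (batch, ctx_len) token ids of the flattened dialogue history
        sent = self.conv(self.emb(utterance).transpose(1, 2))  # (batch, n_filters, sent_len)
        sent = sent.max(dim=2).values                          # max-pool over time
        _, (ctx, _) = self.ctx_lstm(self.emb(context))         # final hidden state
        features = torch.cat([sent, ctx[-1]], dim=1)
        return self.classifier(features)                       # one logit per semantic label

model = CNNLSTMDecoder()
logits = model(torch.randint(0, 5000, (2, 10)), torch.randint(0, 5000, (2, 40)))
print(logits.shape)  # torch.Size([2, 50])
```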

pdf
On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems
Pei-Hao Su | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Stefan Ultes | David Vandyke | Tsung-Hsien Wen | Steve Young
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

pdf
Using Paraphrases and Lexical Semantics to Improve the Accuracy and the Robustness of Supervised Models in Situated Dialogue Systems
Claire Gardent | Lina M. Rojas Barahona
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Unsupervised structured semantic inference for spoken dialog reservation tasks
Alejandra Lorenzo | Lina Rojas-Barahona | Christophe Cerisara
Proceedings of the SIGDIAL 2013 Conference

pdf
Weakly and Strongly Constrained Dialogues for Language Learning
Claire Gardent | Alejandra Lorenzo | Laura Perez-Beltrachini | Lina Rojas-Barahona
Proceedings of the SIGDIAL 2013 Conference

2012

pdf
Robustesse et portabilités multilingue et multi-domaines des systèmes de compréhension de la parole : les corpus du projet PortMedia (Robustness and portability of spoken language understanding systems among languages and domains : the PORTMEDIA project) [in French]
Fabrice Lefèvre | Djamel Mostefa | Laurent Besacier | Yannick Estève | Matthieu Quignard | Nathalie Camelin | Benoit Favre | Bassam Jabaian | Lina Rojas-Barahona
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP

pdf bib
An End-to-End Evaluation of Two Situated Dialog Systems
Lina M. Rojas-Barahona | Alejandra Lorenzo | Claire Gardent
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

pdf
Building and Exploiting a Corpus of Dialog Interactions between French Speaking Virtual and Human Agents
Lina M. Rojas-Barahona | Alejandra Lorenzo | Claire Gardent
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

We describe the acquisition of a dialog corpus for French based on multi-task human-machine interactions in a serious game setting. We present a tool for data collection that is configurable for multiple games; describe the data collected using this tool and the annotation schema used to annotate it; and report on the results obtained when training a classifier on the annotated data to associate each player turn with a dialog move usable by a rule-based dialog manager. The collected data consist of approximately 1,250 dialogs, 10,454 utterances and 168,509 words, and will be made freely available for academic and nonprofit research.
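
The dialog-move classifier mentioned above can be approximated with a very small sketch using scikit-learn: utterances are vectorized and mapped to move labels. The move labels and training utterances below are illustrative assumptions, not the annotated corpus itself.

```python
# Minimal sketch: map a player turn to a dialog move with a text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_utterances = [
    "bonjour, je cherche le bureau du directeur",
    "oui d'accord",
    "non merci",
    "ou se trouve la salle des machines ?",
]
train_moves = ["ask_direction", "accept", "reject", "ask_direction"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_utterances, train_moves)

print(clf.predict(["je veux aller au laboratoire"]))  # e.g. ['ask_direction']
```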

pdf
Leveraging study of robustness and portability of spoken language understanding systems across languages and domains: the PORTMEDIA corpora
Fabrice Lefèvre | Djamel Mostefa | Laurent Besacier | Yannick Estève | Matthieu Quignard | Nathalie Camelin | Benoit Favre | Bassam Jabaian | Lina M. Rojas-Barahona
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The PORTMEDIA project is intended to develop new corpora for the evaluation of spoken language understanding systems. The newly collected data are in the field of human-machine dialogue systems for tourist information in French, in line with the MEDIA corpus. Transcriptions and semantic annotations, obtained by low-cost procedures, are provided to allow a thorough evaluation of the systems' capabilities in terms of robustness and portability across languages and domains. A new test set with some adaptation data is prepared for each case: in Italian as an example of a new language, and for ticket reservation as an example of a new domain. Finally, the work is complemented by the proposal of a new high-level semantic annotation scheme well suited to dialogue data.

2011

pdf
Using MMIL for the High Level Semantic Annotation of the French MEDIA Dialogue Corpus
Lina Maria Rojas-Barahona | Thierry Bazillon | Matthieu Quignard | Fabrice Lefevre
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

pdf
An Incremental Architecture for the Semantic Annotation of Dialogue Corpora with High-Level Structures. A case of study for the MEDIA corpus.
Lina Maria Rojas-Barahona | Matthieu Quignard
Proceedings of the SIGDIAL 2011 Conference

2007

pdf
AdaRTE: An Extensible and Adaptable Architecture for Dialog Systems
Lina Rojas | Toni Giorgino
Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies