2022
Transfer Learning and Masked Generation for Answer Verbalization
Sebastien Montella | Lina Rojas-Barahona | Frederic Bechet | Johannes Heinecke | Alexis Nasr
Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI)
Structured knowledge has recently emerged as an essential component for supporting fine-grained Question Answering (QA). In general, QA systems query a Knowledge Base (KB) to detect and extract raw answers as their final prediction. However, raw answers lack context; language generation can offer a far more informative and complete response. In this paper, we propose to combine the power of transfer learning with the advantage of entity placeholders to produce high-quality verbalizations of answers extracted from a KB. We argue that such an approach is especially well-suited for answer generation. Our experiments show relative gains in BLEU of 44.25%, 3.26% and 29.10% over the state of the art on the VQuAnDA, ParaQA and VANiLLa datasets, respectively. We additionally provide minor hallucination corrections in VANiLLa, affecting 5% of both the training and test sets. We observe a median absolute gain of 0.81 SacreBLEU, underlining the importance of data quality in automated evaluation.
2021
Hyperbolic Temporal Knowledge Graph Embeddings with Relational and Time Curvatures
Sebastien Montella | Lina M. Rojas Barahona | Johannes Heinecke
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Denoising Pre-Training and Data Augmentation Strategies for Enhanced RDF Verbalization with Transformers
Sebastien Montella | Betty Fabre | Tanguy Urvoy | Johannes Heinecke | Lina Rojas-Barahona
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
The task of verbalizing RDF triples has grown in popularity due to the rising ubiquity of Knowledge Bases (KBs). The RDF triple formalism is a simple and efficient way to store facts at large scale. However, its abstract representation makes it difficult for humans to interpret. To address this, the WebNLG challenge promotes automated RDF-to-text generation. We propose to leverage denoising pre-training on augmented data with the Transformer model, using a dedicated data augmentation strategy. Our experimental results show minimum relative increases in BLEU score of 3.73%, 126.05% and 88.16% for seen categories, unseen entities and unseen categories, respectively, over standard training.