Abstract
This paper describes the submission by the NILC Computational Linguistics research group of the University of São Paulo, Brazil, to the English RDF-to-Text task at the WebNLG+ challenge. The success of current pretrained models such as BERT and GPT-2 on text-to-text generation tasks is well known; however, their performance on data-to-text generation has not been as thoroughly studied. We therefore explore how well a pretrained model, in particular BART, performs on the data-to-text generation task. Our results were worse than the baseline and other systems on almost all automatic measures, but the human evaluation shows better results for our system. Moreover, the results suggest that BART may generate paraphrases of the reference texts.
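As a rough illustration of the approach the abstract describes, the sketch below shows how RDF triples might be linearized into a string and passed to a pretrained BART model via the Hugging Face transformers library. The linearization markers, the facebook/bart-base checkpoint, and the generation settings are illustrative assumptions, not the authors' exact setup; a real submission would first fine-tune the model on WebNLG pairs of linearized triples and reference texts.

```python
# Minimal sketch of RDF-to-text generation with pretrained BART
# (Hugging Face transformers). Linearization format and settings
# are assumptions, not the paper's exact configuration.
from transformers import BartForConditionalGeneration, BartTokenizer

def linearize(triples):
    """Flatten (subject, predicate, object) triples into one input string."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical WebNLG-style input triples.
triples = [("Alan_Bean", "occupation", "Test_pilot"),
           ("Alan_Bean", "birthPlace", "Wheeler,_Texas")]
inputs = tokenizer(linearize(triples), return_tensors="pt", truncation=True)

# Inference only; without fine-tuning on WebNLG, BART will mostly
# reproduce its input rather than verbalize the triples.
output_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```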
- Anthology ID:
- 2020.webnlg-1.14
- Volume:
- Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
- Month:
- December
- Year:
- 2020
- Address:
- Dublin, Ireland (Virtual)
- Editors:
- Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussallem, Anastasia Shimorina
- Venue:
- WebNLG
- SIG:
- SIGGEN
- Publisher:
- Association for Computational Linguistics
- Pages:
- 131–136
- URL:
- https://aclanthology.org/2020.webnlg-1.14
- Cite (ACL):
- Marco Antonio Sobrevilla Cabezudo and Thiago A. S. Pardo. 2020. NILC at WebNLG+: Pretrained Sequence-to-Sequence Models on RDF-to-Text Generation. In Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 131–136, Dublin, Ireland (Virtual). Association for Computational Linguistics.
- Cite (Informal):
- NILC at WebNLG+: Pretrained Sequence-to-Sequence Models on RDF-to-Text Generation (Sobrevilla Cabezudo & Pardo, WebNLG 2020)
- PDF:
- https://aclanthology.org/2020.webnlg-1.14.pdf