Abstract
This paper is devoted to the problem of automatic text generation from RDF triples, which was formalized and posed as part of the 2020 WebNLG challenge. We describe our approach to the RDF-to-text generation task based on a neural network model with the Generative Pre-Training (GPT-2) architecture. In particular, we outline how the base GPT-2 model is converted into a model with language-modeling and classification heads, and we discuss the text generation methods. To study the influence of the parameters on end-task performance, we carried out a series of experiments. We report the resulting metrics and conclude with possible directions for improvement.
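As an illustration of the kind of model the abstract refers to, the sketch below shows how a GPT-2 model exposing both a language-modeling head and a classification head can be loaded with the Hugging Face Transformers `GPT2DoubleHeadsModel` class. The triple linearization, the candidate verbalizations, and the `[CLS]` marker token are illustrative assumptions, not the authors' implementation (their code is linked under "Code" below).

```python
# Minimal sketch (not the paper's code): GPT-2 with a language-modeling head
# plus a classification head via Hugging Face Transformers.
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# Assumption: a [CLS] token marks the position the classification head reads from.
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

# Two candidate verbalizations of one hypothetically linearized triple;
# the classification head scores which candidate is the better verbalization.
candidates = [
    "John Blaha | birthPlace | San Antonio => John Blaha was born in San Antonio. [CLS]",
    "John Blaha | birthPlace | San Antonio => San Antonio was born in John Blaha. [CLS]",
]
encoded = [tokenizer.encode(c) for c in candidates]
max_len = max(len(ids) for ids in encoded)
encoded = [ids + [tokenizer.eos_token_id] * (max_len - len(ids)) for ids in encoded]
cls_positions = [ids.index(tokenizer.cls_token_id) for ids in encoded]

input_ids = torch.tensor(encoded).unsqueeze(0)   # shape: (batch=1, n_choices=2, seq_len)
mc_token_ids = torch.tensor([cls_positions])     # shape: (batch=1, n_choices=2)

outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.logits     # language-modeling head: next-token logits per position
mc_logits = outputs.mc_logits  # classification head: one score per candidate
```

Text generation itself can then be run from the language-modeling head with standard decoding methods (greedy, beam search, or top-k/top-p sampling); the exact generation setup used in the paper may differ.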
- Anthology ID: 2020.webnlg-1.17
- Volume: Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
- Month: December
- Year: 2020
- Address: Dublin, Ireland (Virtual)
- Venue: WebNLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 154–158
- URL: https://aclanthology.org/2020.webnlg-1.17
- Cite (ACL): Pavel Blinov. 2020. Semantic Triples Verbalization with Generative Pre-Training Model. In Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 154–158, Dublin, Ireland (Virtual). Association for Computational Linguistics.
- Cite (Informal): Semantic Triples Verbalization with Generative Pre-Training Model (Blinov, WebNLG 2020)
- PDF: https://preview.aclanthology.org/paclic-22-ingestion/2020.webnlg-1.17.pdf
- Code: blinovpd/ru-rdf2text