2020
A General Benchmarking Framework for Text Generation
Diego Moussallem | Paramjot Kaur | Thiago Ferreira | Chris van der Lee | Anastasia Shimorina | Felix Conrads | Michael Röder | René Speck | Claire Gardent | Simon Mille | Nikolai Ilinykh | Axel-Cyrille Ngonga Ngomo
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
The RDF-to-text task has recently gained substantial attention due to the continuous growth of RDF knowledge graphs in both number and size. Recent studies have focused on systematically comparing RDF-to-text approaches on benchmarking datasets such as WebNLG. Although some evaluation tools have already been proposed for text generation, none of the existing solutions abides by the Findability, Accessibility, Interoperability, and Reusability (FAIR) principles while also involving RDF data for the knowledge extraction task. In this paper, we present BENG, a FAIR benchmarking platform for Natural Language Generation (NLG) and Knowledge Extraction systems with a focus on RDF data. BENG builds upon the successful benchmarking platform GERBIL, is open-source, and is publicly available along with the data it contains.