Abstract
We analyze the behaviors of open large language models (LLMs) on the task of data-to-text (D2T) generation, i.e., generating coherent and relevant text from structured data. To avoid the issue of LLM training data contamination with standard benchmarks, we design Quintd, a tool for collecting novel structured data records from public APIs. We find that open LLMs (Llama 2, Mistral, and Zephyr) can generate fluent and coherent texts in zero-shot settings from data in common formats collected with Quintd. However, we show that the semantic accuracy of the outputs is a major issue: both according to human annotators and our reference-free metric based on GPT-4, more than 80% of the outputs of open LLMs contain at least one semantic error. We publicly release the code, data, and model outputs.
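To make the described setup concrete, the sketch below illustrates the kind of pipeline the abstract summarizes: serialize a structured data record, prompt an open LLM zero-shot to verbalize it, and ask a GPT-4-class judge to flag unsupported claims. This is a minimal illustration only; the toy weather record, the model choices, the prompt wording, and the judge instruction are assumptions and do not reproduce the authors' released code or their error taxonomy.

```python
# Hypothetical sketch of zero-shot D2T generation plus a GPT-4-based
# reference-free check, in the spirit of the setup described above.
import json
from transformers import pipeline   # open LLM via Hugging Face
from openai import OpenAI           # GPT-4-class judge (needs OPENAI_API_KEY)

# Toy data record standing in for a Quintd-collected API response (illustrative).
record = {
    "city": "Bangkok",
    "date": "2024-08-12",
    "temperature_c": {"min": 26, "max": 33},
    "conditions": "scattered thunderstorms",
}

# Any open instruction-tuned LLM; Mistral is one of the models studied in the paper.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

prompt = (
    "Write a short, factually accurate weather report based only on the "
    "following JSON data. Do not add information that is not in the data.\n\n"
    + json.dumps(record, indent=2)
)

# Zero-shot generation from the structured record.
output = generator(prompt, max_new_tokens=150, do_sample=False)[0]["generated_text"]
print(output)

# Reference-free semantic check: ask a GPT-4-class model to list claims in the
# generated text that the data does not support (prompt is illustrative).
client = OpenAI()
judge = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Given the data:\n" + json.dumps(record)
            + "\n\nand the generated text:\n" + output
            + "\n\nList every claim in the text that is not supported by the data."
        ),
    }],
)
print(judge.choices[0].message.content)
```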
- Anthology ID: 2024.acl-long.651
- Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 12045–12072
- URL: https://aclanthology.org/2024.acl-long.651
- DOI: 10.18653/v1/2024.acl-long.651
- Cite (ACL): Zdeněk Kasner and Ondrej Dusek. 2024. Beyond Traditional Benchmarks: Analyzing Behaviors of Open LLMs on Data-to-Text Generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12045–12072, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): Beyond Traditional Benchmarks: Analyzing Behaviors of Open LLMs on Data-to-Text Generation (Kasner & Dusek, ACL 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2024.acl-long.651.pdf